00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2336
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3601
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.047 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.048 The recommended git tool is: git
00:00:00.048 using credential 00000000-0000-0000-0000-000000000002
00:00:00.050 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.070 Fetching changes from the remote Git repository
00:00:00.071 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.100 Using shallow fetch with depth 1
00:00:00.100 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.100 > git --version # timeout=10
00:00:00.135 > git --version # 'git version 2.39.2'
00:00:00.135 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.171 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.171 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.544 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.557 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.569 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD)
00:00:03.569 > git config core.sparsecheckout # timeout=10
00:00:03.580 > git read-tree -mu HEAD # timeout=10
00:00:03.596 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5
00:00:03.614 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser"
00:00:03.614 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10
00:00:03.735 [Pipeline] Start of Pipeline
00:00:03.751 [Pipeline] library
00:00:03.753 Loading library shm_lib@master
00:00:03.753 Library shm_lib@master is cached. Copying from home.
00:00:03.765 [Pipeline] node
00:00:03.778 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:03.780 [Pipeline] {
00:00:03.789 [Pipeline] catchError
00:00:03.790 [Pipeline] {
00:00:03.800 [Pipeline] wrap
00:00:03.808 [Pipeline] {
00:00:03.813 [Pipeline] stage
00:00:03.814 [Pipeline] { (Prologue)
00:00:03.828 [Pipeline] echo
00:00:03.828 Node: VM-host-WFP7
00:00:03.832 [Pipeline] cleanWs
00:00:03.841 [WS-CLEANUP] Deleting project workspace...
00:00:03.841 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.848 [WS-CLEANUP] done
00:00:04.054 [Pipeline] setCustomBuildProperty
00:00:04.153 [Pipeline] httpRequest
00:00:04.489 [Pipeline] echo
00:00:04.490 Sorcerer 10.211.164.101 is alive
00:00:04.500 [Pipeline] retry
00:00:04.502 [Pipeline] {
00:00:04.516 [Pipeline] httpRequest
00:00:04.520 HttpMethod: GET
00:00:04.521 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz
00:00:04.521 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz
00:00:04.522 Response Code: HTTP/1.1 200 OK
00:00:04.523 Success: Status code 200 is in the accepted range: 200,404
00:00:04.523 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz
00:00:04.669 [Pipeline] }
00:00:04.688 [Pipeline] // retry
00:00:04.696 [Pipeline] sh
00:00:04.983 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz
00:00:04.999 [Pipeline] httpRequest
00:00:05.379 [Pipeline] echo
00:00:05.381 Sorcerer 10.211.164.101 is alive
00:00:05.392 [Pipeline] retry
00:00:05.394 [Pipeline] {
00:00:05.410 [Pipeline] httpRequest
00:00:05.415 HttpMethod: GET
00:00:05.416 URL: http://10.211.164.101/packages/spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz
00:00:05.416 Sending request to url: http://10.211.164.101/packages/spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz
00:00:05.418 Response Code: HTTP/1.1 200 OK
00:00:05.420 Success: Status code 200 is in the accepted range: 200,404
00:00:05.421 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz
00:00:23.629 [Pipeline] }
00:00:23.645 [Pipeline] // retry
00:00:23.651 [Pipeline] sh
00:00:23.937 + tar --no-same-owner -xf spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz
00:00:26.524 [Pipeline] sh
00:00:26.809 + git -C spdk log --oneline -n5
00:00:26.809 fa3ab7384 bdev/raid: Fix raid_bdev->sb null pointer
00:00:26.809 12fc2abf1 test: Remove autopackage.sh
00:00:26.809 83ba90867 fio/bdev: fix typo in README
00:00:26.809 45379ed84 module/compress: Cleanup vol data, when claim fails
00:00:26.809 0afe95a3a bdev/nvme: use bdev_nvme linker script
00:00:26.831 [Pipeline] withCredentials
00:00:26.842 > git --version # timeout=10
00:00:26.852 > git --version # 'git version 2.39.2'
00:00:26.869 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:26.871 [Pipeline] {
00:00:26.880 [Pipeline] retry
00:00:26.882 [Pipeline] {
00:00:26.896 [Pipeline] sh
00:00:27.183 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:27.777 [Pipeline] }
00:00:27.795 [Pipeline] // retry
00:00:27.800 [Pipeline] }
00:00:27.816 [Pipeline] // withCredentials
00:00:27.827 [Pipeline] httpRequest
00:00:28.262 [Pipeline] echo
00:00:28.267 Sorcerer 10.211.164.101 is alive
00:00:28.302 [Pipeline] retry
00:00:28.304 [Pipeline] {
00:00:28.312 [Pipeline] httpRequest
00:00:28.315 HttpMethod: GET
00:00:28.316 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:28.316 Sending request to url:
http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:28.318 Response Code: HTTP/1.1 200 OK
00:00:28.319 Success: Status code 200 is in the accepted range: 200,404
00:00:28.319 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:38.170 [Pipeline] }
00:00:38.188 [Pipeline] // retry
00:00:38.196 [Pipeline] sh
00:00:38.481 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:39.888 [Pipeline] sh
00:00:40.172 + git -C dpdk log --oneline -n5
00:00:40.172 caf0f5d395 version: 22.11.4
00:00:40.172 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:00:40.172 dc9c799c7d vhost: fix missing spinlock unlock
00:00:40.172 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:00:40.172 6ef77f2a5e net/gve: fix RX buffer size alignment
00:00:40.191 [Pipeline] writeFile
00:00:40.206 [Pipeline] sh
00:00:40.492 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:40.505 [Pipeline] sh
00:00:40.791 + cat autorun-spdk.conf
00:00:40.791 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:40.791 SPDK_RUN_ASAN=1
00:00:40.791 SPDK_RUN_UBSAN=1
00:00:40.791 SPDK_TEST_RAID=1
00:00:40.791 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:40.791 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:00:40.791 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:40.799 RUN_NIGHTLY=1
00:00:40.801 [Pipeline] }
00:00:40.814 [Pipeline] // stage
00:00:40.829 [Pipeline] stage
00:00:40.831 [Pipeline] { (Run VM)
00:00:40.844 [Pipeline] sh
00:00:41.127 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:41.127 + echo 'Start stage prepare_nvme.sh'
00:00:41.127 Start stage prepare_nvme.sh
00:00:41.127 + [[ -n 1 ]]
00:00:41.127 + disk_prefix=ex1
00:00:41.127 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:41.127 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:41.127 + source
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:41.127 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:41.127 ++ SPDK_RUN_ASAN=1
00:00:41.127 ++ SPDK_RUN_UBSAN=1
00:00:41.127 ++ SPDK_TEST_RAID=1
00:00:41.127 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:41.127 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:00:41.127 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:41.127 ++ RUN_NIGHTLY=1
00:00:41.127 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:41.127 + nvme_files=()
00:00:41.127 + declare -A nvme_files
00:00:41.127 + backend_dir=/var/lib/libvirt/images/backends
00:00:41.127 + nvme_files['nvme.img']=5G
00:00:41.127 + nvme_files['nvme-cmb.img']=5G
00:00:41.127 + nvme_files['nvme-multi0.img']=4G
00:00:41.127 + nvme_files['nvme-multi1.img']=4G
00:00:41.127 + nvme_files['nvme-multi2.img']=4G
00:00:41.127 + nvme_files['nvme-openstack.img']=8G
00:00:41.127 + nvme_files['nvme-zns.img']=5G
00:00:41.127 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:41.127 + (( SPDK_TEST_FTL == 1 ))
00:00:41.127 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:41.127 + [[ !
-d /var/lib/libvirt/images/backends ]]
00:00:41.127 + for nvme in "${!nvme_files[@]}"
00:00:41.127 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:00:41.127 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:41.127 + for nvme in "${!nvme_files[@]}"
00:00:41.127 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:00:41.127 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:41.127 + for nvme in "${!nvme_files[@]}"
00:00:41.127 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:00:41.127 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:41.127 + for nvme in "${!nvme_files[@]}"
00:00:41.127 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:00:41.127 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:41.127 + for nvme in "${!nvme_files[@]}"
00:00:41.127 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:00:41.127 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:41.127 + for nvme in "${!nvme_files[@]}"
00:00:41.127 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:00:41.127 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:41.127 + for nvme in "${!nvme_files[@]}"
00:00:41.127 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:00:41.386
Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:41.386 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:00:41.386 + echo 'End stage prepare_nvme.sh'
00:00:41.386 End stage prepare_nvme.sh
00:00:41.396 [Pipeline] sh
00:00:41.676 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:41.676 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:00:41.676
00:00:41.676 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:41.676 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:41.676 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:41.676 HELP=0
00:00:41.676 DRY_RUN=0
00:00:41.676 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:00:41.676 NVME_DISKS_TYPE=nvme,nvme,
00:00:41.676 NVME_AUTO_CREATE=0
00:00:41.676 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:00:41.676 NVME_CMB=,,
00:00:41.676 NVME_PMR=,,
00:00:41.676 NVME_ZNS=,,
00:00:41.676 NVME_MS=,,
00:00:41.676 NVME_FDP=,,
00:00:41.676 SPDK_VAGRANT_DISTRO=fedora39
00:00:41.676 SPDK_VAGRANT_VMCPU=10
00:00:41.676 SPDK_VAGRANT_VMRAM=12288
00:00:41.676 SPDK_VAGRANT_PROVIDER=libvirt
00:00:41.676 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:41.676 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:41.676 SPDK_OPENSTACK_NETWORK=0
00:00:41.676 VAGRANT_PACKAGE_BOX=0
00:00:41.676 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
FORCE_DISTRO=true
00:00:41.676 VAGRANT_BOX_VERSION=
00:00:41.676 EXTRA_VAGRANTFILES=
00:00:41.676 NIC_MODEL=virtio
00:00:41.676
00:00:41.676 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:41.676 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:43.596 Bringing machine 'default' up with 'libvirt' provider...
00:00:44.171 ==> default: Creating image (snapshot of base box volume).
00:00:44.171 ==> default: Creating domain with the following settings...
00:00:44.171 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730590837_d8b8bb3201f22e572c3e
00:00:44.171 ==> default: -- Domain type: kvm
00:00:44.171 ==> default: -- Cpus: 10
00:00:44.171 ==> default: -- Feature: acpi
00:00:44.171 ==> default: -- Feature: apic
00:00:44.171 ==> default: -- Feature: pae
00:00:44.171 ==> default: -- Memory: 12288M
00:00:44.171 ==> default: -- Memory Backing: hugepages:
00:00:44.171 ==> default: -- Management MAC:
00:00:44.171 ==> default: -- Loader:
00:00:44.171 ==> default: -- Nvram:
00:00:44.171 ==> default: -- Base box: spdk/fedora39
00:00:44.171 ==> default: -- Storage pool: default
00:00:44.171 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730590837_d8b8bb3201f22e572c3e.img (20G)
00:00:44.171 ==> default: -- Volume Cache: default
00:00:44.171 ==> default: -- Kernel:
00:00:44.171 ==> default: -- Initrd:
00:00:44.171 ==> default: -- Graphics Type: vnc
00:00:44.171 ==> default: -- Graphics Port: -1
00:00:44.171 ==> default: -- Graphics IP: 127.0.0.1
00:00:44.171 ==> default: -- Graphics Password: Not defined
00:00:44.171 ==> default: -- Video Type: cirrus
00:00:44.171 ==> default: -- Video VRAM: 9216
00:00:44.171 ==> default: -- Sound Type:
00:00:44.171 ==> default: -- Keymap: en-us
00:00:44.171 ==> default: -- TPM Path:
00:00:44.171 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:44.171 ==> default: -- Command line args:
00:00:44.171
==> default: -> value=-device,
00:00:44.171 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:44.171 ==> default: -> value=-drive,
00:00:44.171 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:00:44.171 ==> default: -> value=-device,
00:00:44.171 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:44.171 ==> default: -> value=-device,
00:00:44.171 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:44.171 ==> default: -> value=-drive,
00:00:44.171 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:44.171 ==> default: -> value=-device,
00:00:44.171 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:44.171 ==> default: -> value=-drive,
00:00:44.171 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:44.171 ==> default: -> value=-device,
00:00:44.171 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:44.171 ==> default: -> value=-drive,
00:00:44.171 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:44.171 ==> default: -> value=-device,
00:00:44.172 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:44.432 ==> default: Creating shared folders metadata...
00:00:44.432 ==> default: Starting domain.
00:00:46.341 ==> default: Waiting for domain to get an IP address...
00:01:04.507 ==> default: Waiting for SSH to become available...
00:01:04.507 ==> default: Configuring and enabling network interfaces...
00:01:08.715 default: SSH address: 192.168.121.94:22
00:01:08.715 default: SSH username: vagrant
00:01:08.715 default: SSH auth method: private key
00:01:11.256 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:19.383 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:01:24.688 ==> default: Mounting SSHFS shared folder...
00:01:27.241 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:27.241 ==> default: Checking Mount..
00:01:29.152 ==> default: Folder Successfully Mounted!
00:01:29.152 ==> default: Running provisioner: file...
00:01:30.092 default: ~/.gitconfig => .gitconfig
00:01:30.661
00:01:30.661 SUCCESS!
00:01:30.661
00:01:30.661 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:30.661 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:30.661 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:30.661
00:01:30.671 [Pipeline] }
00:01:30.685 [Pipeline] // stage
00:01:30.695 [Pipeline] dir
00:01:30.695 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:30.697 [Pipeline] {
00:01:30.709 [Pipeline] catchError
00:01:30.711 [Pipeline] {
00:01:30.724 [Pipeline] sh
00:01:31.009 + vagrant ssh-config --host vagrant
00:01:31.009 + sed -ne /^Host/,$p
00:01:31.009 + tee ssh_conf
00:01:33.544 Host vagrant
00:01:33.544 HostName 192.168.121.94
00:01:33.544 User vagrant
00:01:33.544 Port 22
00:01:33.544 UserKnownHostsFile /dev/null
00:01:33.544 StrictHostKeyChecking no
00:01:33.544 PasswordAuthentication no
00:01:33.544 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:33.544 IdentitiesOnly yes
00:01:33.544 LogLevel FATAL
00:01:33.544 ForwardAgent yes
00:01:33.544 ForwardX11 yes
00:01:33.544
00:01:33.560 [Pipeline] withEnv
00:01:33.563 [Pipeline] {
00:01:33.577 [Pipeline] sh
00:01:33.859 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:33.859 source /etc/os-release
00:01:33.859 [[ -e /image.version ]] && img=$(< /image.version)
00:01:33.859 # Minimal, systemd-like check.
00:01:33.859 if [[ -e /.dockerenv ]]; then
00:01:33.859 # Clear garbage from the node's name:
00:01:33.859 # agt-er_autotest_547-896 -> autotest_547-896
00:01:33.859 # $HOSTNAME is the actual container id
00:01:33.859 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:33.859 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:33.859 # We can assume this is a mount from a host where container is running,
00:01:33.859 # so fetch its hostname to easily identify the target swarm worker.
00:01:33.859 container="$(< /etc/hostname) ($agent)"
00:01:33.859 else
00:01:33.859 # Fallback
00:01:33.859 container=$agent
00:01:33.859 fi
00:01:33.859 fi
00:01:33.859 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:33.859
00:01:34.133 [Pipeline] }
00:01:34.148 [Pipeline] // withEnv
00:01:34.156 [Pipeline] setCustomBuildProperty
00:01:34.170 [Pipeline] stage
00:01:34.172 [Pipeline] { (Tests)
00:01:34.187 [Pipeline] sh
00:01:34.472 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:34.745 [Pipeline] sh
00:01:35.029 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:35.305 [Pipeline] timeout
00:01:35.306 Timeout set to expire in 1 hr 30 min
00:01:35.307 [Pipeline] {
00:01:35.322 [Pipeline] sh
00:01:35.606 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:36.185 HEAD is now at fa3ab7384 bdev/raid: Fix raid_bdev->sb null pointer
00:01:36.205 [Pipeline] sh
00:01:36.485 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:36.759 [Pipeline] sh
00:01:37.043 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:37.321 [Pipeline] sh
00:01:37.604 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:37.877 ++ readlink -f spdk_repo
00:01:37.877 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:37.877 + [[ -n /home/vagrant/spdk_repo ]]
00:01:37.877 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:37.877 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:37.877 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:37.877 + [[ !
-d /home/vagrant/spdk_repo/output ]]
00:01:37.877 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:37.877 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:37.877 + cd /home/vagrant/spdk_repo
00:01:37.877 + source /etc/os-release
00:01:37.877 ++ NAME='Fedora Linux'
00:01:37.877 ++ VERSION='39 (Cloud Edition)'
00:01:37.877 ++ ID=fedora
00:01:37.877 ++ VERSION_ID=39
00:01:37.877 ++ VERSION_CODENAME=
00:01:37.877 ++ PLATFORM_ID=platform:f39
00:01:37.877 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:37.877 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:37.877 ++ LOGO=fedora-logo-icon
00:01:37.877 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:37.877 ++ HOME_URL=https://fedoraproject.org/
00:01:37.877 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:37.877 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:37.877 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:37.877 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:37.877 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:37.877 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:37.877 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:37.877 ++ SUPPORT_END=2024-11-12
00:01:37.877 ++ VARIANT='Cloud Edition'
00:01:37.877 ++ VARIANT_ID=cloud
00:01:37.877 + uname -a
00:01:37.877 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:37.877 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:38.446 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:38.446 Hugepages
00:01:38.446 node hugesize free / total
00:01:38.446 node0 1048576kB 0 / 0
00:01:38.446 node0 2048kB 0 / 0
00:01:38.446
00:01:38.446 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:38.446 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:38.446 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:38.446 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1
nvme1n1 nvme1n2 nvme1n3
00:01:38.446 + rm -f /tmp/spdk-ld-path
00:01:38.446 + source autorun-spdk.conf
00:01:38.446 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:38.446 ++ SPDK_RUN_ASAN=1
00:01:38.446 ++ SPDK_RUN_UBSAN=1
00:01:38.446 ++ SPDK_TEST_RAID=1
00:01:38.446 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:38.446 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:38.446 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:38.446 ++ RUN_NIGHTLY=1
00:01:38.446 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:38.446 + [[ -n '' ]]
00:01:38.446 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:38.446 + for M in /var/spdk/build-*-manifest.txt
00:01:38.446 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:38.446 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:38.446 + for M in /var/spdk/build-*-manifest.txt
00:01:38.446 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:38.446 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:38.706 + for M in /var/spdk/build-*-manifest.txt
00:01:38.706 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:38.706 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:38.706 ++ uname
00:01:38.706 + [[ Linux == \L\i\n\u\x ]]
00:01:38.706 + sudo dmesg -T
00:01:38.706 + sudo dmesg --clear
00:01:38.706 + dmesg_pid=6173
00:01:38.706 + [[ Fedora Linux == FreeBSD ]]
00:01:38.706 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:38.706 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:38.706 + sudo dmesg -Tw
00:01:38.706 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:38.706 + [[ -x /usr/src/fio-static/fio ]]
00:01:38.706 + export FIO_BIN=/usr/src/fio-static/fio
00:01:38.707 + FIO_BIN=/usr/src/fio-static/fio
00:01:38.707 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:38.707 + [[ !
-v VFIO_QEMU_BIN ]]
00:01:38.707 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:38.707 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:38.707 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:38.707 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:38.707 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:38.707 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:38.707 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:38.707 23:41:32 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:38.707 23:41:32 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:38.707 23:41:32 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:38.707 23:41:32 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:38.707 23:41:32 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:38.707 23:41:32 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:38.707 23:41:32 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:38.707 23:41:32 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:38.707 23:41:32 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:38.707 23:41:32 -- spdk_repo/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1
00:01:38.707 23:41:32 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:38.707 23:41:32 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:38.967 23:41:32 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:38.967 23:41:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:38.967 23:41:32 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:38.967 23:41:32 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:38.967 23:41:32
-- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:38.967 23:41:32 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:38.967 23:41:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:38.967 23:41:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:38.967 23:41:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:38.967 23:41:32 -- paths/export.sh@5 -- $ export PATH
00:01:38.967 23:41:32 -- paths/export.sh@6 -- $ echo
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:38.967 23:41:32 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:38.967 23:41:32 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:38.967 23:41:32 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730590892.XXXXXX
00:01:38.967 23:41:32 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730590892.6r7WJH
00:01:38.967 23:41:32 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:38.967 23:41:32 -- common/autobuild_common.sh@492 -- $ '[' -n v22.11.4 ']'
00:01:38.967 23:41:32 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:01:38.967 23:41:32 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:01:38.967 23:41:32 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:38.967 23:41:32 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:38.967 23:41:32 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:38.967 23:41:32 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:38.967 23:41:32 -- common/autotest_common.sh@10 -- $ set +x
00:01:38.967 23:41:32 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator
--disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:01:38.967 23:41:32 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:38.967 23:41:32 -- pm/common@17 -- $ local monitor
00:01:38.967 23:41:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:38.967 23:41:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:38.967 23:41:32 -- pm/common@25 -- $ sleep 1
00:01:38.967 23:41:32 -- pm/common@21 -- $ date +%s
00:01:38.967 23:41:32 -- pm/common@21 -- $ date +%s
00:01:38.967 23:41:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730590892
00:01:38.967 23:41:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730590892
00:01:38.967 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730590892_collect-vmstat.pm.log
00:01:38.967 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730590892_collect-cpu-load.pm.log
00:01:39.916 23:41:33 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:39.916 23:41:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:39.916 23:41:33 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:39.916 23:41:33 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:39.916 23:41:33 -- spdk/autobuild.sh@16 -- $ date -u
00:01:39.916 Sat Nov 2 11:41:33 PM UTC 2024
00:01:39.916 23:41:33 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:39.916 v25.01-pre-124-gfa3ab7384
00:01:39.916 23:41:33 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:39.916 23:41:33 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:39.916 23:41:33 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:39.917 23:41:33 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:39.917 23:41:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.917 ************************************ 00:01:39.917 START TEST asan 00:01:39.917 ************************************ 00:01:39.917 using asan 00:01:39.917 23:41:33 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:01:39.917 00:01:39.917 real 0m0.001s 00:01:39.917 user 0m0.000s 00:01:39.917 sys 0m0.000s 00:01:39.917 23:41:33 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:39.917 23:41:33 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:39.917 ************************************ 00:01:39.917 END TEST asan 00:01:39.917 ************************************ 00:01:39.917 23:41:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:39.917 23:41:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:39.917 23:41:34 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:39.917 23:41:34 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:39.917 23:41:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.184 ************************************ 00:01:40.184 START TEST ubsan 00:01:40.184 ************************************ 00:01:40.184 using ubsan 00:01:40.184 23:41:34 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:40.184 00:01:40.184 real 0m0.000s 00:01:40.184 user 0m0.000s 00:01:40.184 sys 0m0.000s 00:01:40.184 23:41:34 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:40.184 23:41:34 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:40.184 ************************************ 00:01:40.184 END TEST ubsan 00:01:40.184 ************************************ 00:01:40.184 23:41:34 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:40.184 23:41:34 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:40.184 23:41:34 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:40.184 
23:41:34 -- common/autotest_common.sh@1103 -- $ '[' 2 -le 1 ']' 00:01:40.184 23:41:34 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:40.184 23:41:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.184 ************************************ 00:01:40.184 START TEST build_native_dpdk 00:01:40.184 ************************************ 00:01:40.184 23:41:34 build_native_dpdk -- common/autotest_common.sh@1127 -- $ _build_native_dpdk 00:01:40.184 23:41:34 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:40.184 23:41:34 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:40.184 23:41:34 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:40.184 23:41:34 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:40.184 23:41:34 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:40.184 23:41:34 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:40.184 23:41:34 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:40.184 23:41:34 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:40.184 23:41:34 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:40.184 23:41:34 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:40.184 23:41:34 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@71 -- 
$ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:01:40.185 caf0f5d395 version: 22.11.4 00:01:40.185 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:40.185 dc9c799c7d vhost: fix missing spinlock unlock 00:01:40.185 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:40.185 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local 
mlx5_libs_added=n 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:40.185 patching file config/rte_config.h 00:01:40.185 Hunk #1 succeeded at 60 (offset 1 line). 
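The xtrace above repeatedly steps through `cmp_versions` from `scripts/common.sh`: it splits both version strings on `.`, `-`, and `:` into arrays, then compares them component by component to decide whether the DPDK tree (22.11.4 here) falls in the range that needs each patch. A minimal standalone sketch of that comparison logic, assuming a hypothetical helper name `ver_lt` (the real script uses `lt`/`cmp_versions` with more operators):

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version comparison traced above.
# ver_lt returns 0 (shell "true") when version $1 < version $2.
ver_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"   # e.g. 22.11.4 -> (22 11 4)
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # A missing component compares as 0 (so 22.11 == 22.11.0)
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1  # all components equal -> not less-than
}

ver_lt 22.11.4 21.11.0 && echo lt || echo 'not lt'   # not lt (matches return 1 above)
ver_lt 22.11.4 24.07.0 && echo lt || echo 'not lt'   # lt (matches return 0 above)
```

This mirrors the two outcomes in the trace: `lt 22.11.4 21.11.0` fails (so the 21.11-era fixup is skipped) while `lt 22.11.4 24.07.0` succeeds (so the rte_pcapng patch is applied).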
00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:40.185 patching file lib/pcapng/rte_pcapng.c 00:01:40.185 Hunk #1 succeeded at 110 (offset -18 lines). 
00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:40.185 23:41:34 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:40.185 23:41:34 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:46.771 The Meson build system 00:01:46.771 Version: 1.5.0 00:01:46.771 
Source dir: /home/vagrant/spdk_repo/dpdk 00:01:46.772 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:01:46.772 Build type: native build 00:01:46.772 Program cat found: YES (/usr/bin/cat) 00:01:46.772 Project name: DPDK 00:01:46.772 Project version: 22.11.4 00:01:46.772 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:46.772 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:46.772 Host machine cpu family: x86_64 00:01:46.772 Host machine cpu: x86_64 00:01:46.772 Message: ## Building in Developer Mode ## 00:01:46.772 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:46.772 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:01:46.772 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:01:46.772 Program objdump found: YES (/usr/bin/objdump) 00:01:46.772 Program python3 found: YES (/usr/bin/python3) 00:01:46.772 Program cat found: YES (/usr/bin/cat) 00:01:46.772 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:46.772 Checking for size of "void *" : 8 00:01:46.772 Checking for size of "void *" : 8 (cached) 00:01:46.772 Library m found: YES 00:01:46.772 Library numa found: YES 00:01:46.772 Has header "numaif.h" : YES 00:01:46.772 Library fdt found: NO 00:01:46.772 Library execinfo found: NO 00:01:46.772 Has header "execinfo.h" : YES 00:01:46.772 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:46.772 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:46.772 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:46.772 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:46.772 Run-time dependency openssl found: YES 3.1.1 00:01:46.772 Run-time dependency libpcap found: YES 1.10.4 00:01:46.772 Has header "pcap.h" with dependency libpcap: YES 00:01:46.772 Compiler for C supports arguments -Wcast-qual: YES 00:01:46.772 Compiler for C supports arguments -Wdeprecated: YES 00:01:46.772 Compiler for C supports arguments -Wformat: YES 00:01:46.772 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:46.772 Compiler for C supports arguments -Wformat-security: NO 00:01:46.772 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:46.772 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:46.772 Compiler for C supports arguments -Wnested-externs: YES 00:01:46.772 Compiler for C supports arguments -Wold-style-definition: YES 00:01:46.772 Compiler for C supports arguments -Wpointer-arith: YES 00:01:46.772 Compiler for C supports arguments -Wsign-compare: YES 00:01:46.772 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:46.772 Compiler for C supports arguments -Wundef: YES 00:01:46.772 Compiler for C supports arguments -Wwrite-strings: YES 00:01:46.772 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:46.772 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:46.772 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:46.772 
Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:46.772 Compiler for C supports arguments -mavx512f: YES 00:01:46.772 Checking if "AVX512 checking" compiles: YES 00:01:46.772 Fetching value of define "__SSE4_2__" : 1 00:01:46.772 Fetching value of define "__AES__" : 1 00:01:46.772 Fetching value of define "__AVX__" : 1 00:01:46.772 Fetching value of define "__AVX2__" : 1 00:01:46.772 Fetching value of define "__AVX512BW__" : 1 00:01:46.772 Fetching value of define "__AVX512CD__" : 1 00:01:46.772 Fetching value of define "__AVX512DQ__" : 1 00:01:46.772 Fetching value of define "__AVX512F__" : 1 00:01:46.772 Fetching value of define "__AVX512VL__" : 1 00:01:46.772 Fetching value of define "__PCLMUL__" : 1 00:01:46.772 Fetching value of define "__RDRND__" : 1 00:01:46.772 Fetching value of define "__RDSEED__" : 1 00:01:46.772 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:46.772 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:46.772 Message: lib/kvargs: Defining dependency "kvargs" 00:01:46.772 Message: lib/telemetry: Defining dependency "telemetry" 00:01:46.772 Checking for function "getentropy" : YES 00:01:46.772 Message: lib/eal: Defining dependency "eal" 00:01:46.772 Message: lib/ring: Defining dependency "ring" 00:01:46.772 Message: lib/rcu: Defining dependency "rcu" 00:01:46.772 Message: lib/mempool: Defining dependency "mempool" 00:01:46.772 Message: lib/mbuf: Defining dependency "mbuf" 00:01:46.772 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:46.772 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:46.772 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:46.772 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:46.772 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:46.772 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:46.772 Compiler for C supports arguments -mpclmul: YES 00:01:46.772 Compiler for C supports arguments -maes: YES 
00:01:46.772 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:46.772 Compiler for C supports arguments -mavx512bw: YES 00:01:46.772 Compiler for C supports arguments -mavx512dq: YES 00:01:46.772 Compiler for C supports arguments -mavx512vl: YES 00:01:46.772 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:46.772 Compiler for C supports arguments -mavx2: YES 00:01:46.772 Compiler for C supports arguments -mavx: YES 00:01:46.772 Message: lib/net: Defining dependency "net" 00:01:46.772 Message: lib/meter: Defining dependency "meter" 00:01:46.772 Message: lib/ethdev: Defining dependency "ethdev" 00:01:46.772 Message: lib/pci: Defining dependency "pci" 00:01:46.772 Message: lib/cmdline: Defining dependency "cmdline" 00:01:46.772 Message: lib/metrics: Defining dependency "metrics" 00:01:46.772 Message: lib/hash: Defining dependency "hash" 00:01:46.772 Message: lib/timer: Defining dependency "timer" 00:01:46.772 Fetching value of define "__AVX2__" : 1 (cached) 00:01:46.772 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:46.772 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:46.772 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:46.772 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:46.772 Message: lib/acl: Defining dependency "acl" 00:01:46.772 Message: lib/bbdev: Defining dependency "bbdev" 00:01:46.772 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:46.772 Run-time dependency libelf found: YES 0.191 00:01:46.772 Message: lib/bpf: Defining dependency "bpf" 00:01:46.772 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:46.772 Message: lib/compressdev: Defining dependency "compressdev" 00:01:46.772 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:46.772 Message: lib/distributor: Defining dependency "distributor" 00:01:46.772 Message: lib/efd: Defining dependency "efd" 00:01:46.772 Message: lib/eventdev: Defining dependency "eventdev" 00:01:46.772 Message: lib/gpudev: 
Defining dependency "gpudev" 00:01:46.772 Message: lib/gro: Defining dependency "gro" 00:01:46.772 Message: lib/gso: Defining dependency "gso" 00:01:46.772 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:46.772 Message: lib/jobstats: Defining dependency "jobstats" 00:01:46.772 Message: lib/latencystats: Defining dependency "latencystats" 00:01:46.772 Message: lib/lpm: Defining dependency "lpm" 00:01:46.772 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:46.772 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:46.772 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:46.772 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:46.772 Message: lib/member: Defining dependency "member" 00:01:46.772 Message: lib/pcapng: Defining dependency "pcapng" 00:01:46.772 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:46.772 Message: lib/power: Defining dependency "power" 00:01:46.772 Message: lib/rawdev: Defining dependency "rawdev" 00:01:46.772 Message: lib/regexdev: Defining dependency "regexdev" 00:01:46.772 Message: lib/dmadev: Defining dependency "dmadev" 00:01:46.772 Message: lib/rib: Defining dependency "rib" 00:01:46.772 Message: lib/reorder: Defining dependency "reorder" 00:01:46.772 Message: lib/sched: Defining dependency "sched" 00:01:46.772 Message: lib/security: Defining dependency "security" 00:01:46.772 Message: lib/stack: Defining dependency "stack" 00:01:46.772 Has header "linux/userfaultfd.h" : YES 00:01:46.772 Message: lib/vhost: Defining dependency "vhost" 00:01:46.772 Message: lib/ipsec: Defining dependency "ipsec" 00:01:46.772 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:46.772 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:46.772 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:46.772 Message: lib/fib: Defining dependency "fib" 00:01:46.772 Message: lib/port: Defining dependency "port" 00:01:46.772 Message: lib/pdump: Defining dependency "pdump" 
00:01:46.772 Message: lib/table: Defining dependency "table" 00:01:46.772 Message: lib/pipeline: Defining dependency "pipeline" 00:01:46.772 Message: lib/graph: Defining dependency "graph" 00:01:46.772 Message: lib/node: Defining dependency "node" 00:01:46.772 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:46.772 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:46.772 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:46.772 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:46.772 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:46.772 Compiler for C supports arguments -Wno-unused-value: YES 00:01:46.772 Compiler for C supports arguments -Wno-format: YES 00:01:46.772 Compiler for C supports arguments -Wno-format-security: YES 00:01:46.772 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:46.772 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:47.036 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:47.036 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:47.036 Fetching value of define "__AVX2__" : 1 (cached) 00:01:47.036 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:47.036 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:47.036 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:47.036 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:47.036 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:47.036 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:47.036 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:47.036 Configuring doxy-api.conf using configuration 00:01:47.036 Program sphinx-build found: NO 00:01:47.036 Configuring rte_build_config.h using configuration 00:01:47.036 Message: 00:01:47.036 ================= 00:01:47.036 Applications Enabled 00:01:47.036 ================= 00:01:47.036 00:01:47.036 apps: 
00:01:47.036 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:47.036 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:47.036 test-security-perf, 00:01:47.036 00:01:47.036 Message: 00:01:47.036 ================= 00:01:47.036 Libraries Enabled 00:01:47.036 ================= 00:01:47.036 00:01:47.036 libs: 00:01:47.036 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:47.036 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:47.036 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:47.036 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:47.036 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:47.036 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:47.036 table, pipeline, graph, node, 00:01:47.036 00:01:47.036 Message: 00:01:47.036 =============== 00:01:47.036 Drivers Enabled 00:01:47.036 =============== 00:01:47.036 00:01:47.036 common: 00:01:47.036 00:01:47.036 bus: 00:01:47.036 pci, vdev, 00:01:47.036 mempool: 00:01:47.036 ring, 00:01:47.036 dma: 00:01:47.036 00:01:47.036 net: 00:01:47.036 i40e, 00:01:47.036 raw: 00:01:47.036 00:01:47.036 crypto: 00:01:47.036 00:01:47.036 compress: 00:01:47.036 00:01:47.036 regex: 00:01:47.036 00:01:47.036 vdpa: 00:01:47.036 00:01:47.036 event: 00:01:47.036 00:01:47.036 baseband: 00:01:47.036 00:01:47.036 gpu: 00:01:47.036 00:01:47.036 00:01:47.036 Message: 00:01:47.036 ================= 00:01:47.036 Content Skipped 00:01:47.036 ================= 00:01:47.036 00:01:47.036 apps: 00:01:47.036 00:01:47.036 libs: 00:01:47.036 kni: explicitly disabled via build config (deprecated lib) 00:01:47.036 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:47.036 00:01:47.036 drivers: 00:01:47.036 common/cpt: not in enabled drivers build config 00:01:47.036 common/dpaax: not in enabled drivers build 
config
00:01:47.036 common/iavf: not in enabled drivers build config
00:01:47.036 common/idpf: not in enabled drivers build config
00:01:47.036 common/mvep: not in enabled drivers build config
00:01:47.036 common/octeontx: not in enabled drivers build config
00:01:47.036 bus/auxiliary: not in enabled drivers build config
00:01:47.036 bus/dpaa: not in enabled drivers build config
00:01:47.036 bus/fslmc: not in enabled drivers build config
00:01:47.036 bus/ifpga: not in enabled drivers build config
00:01:47.036 bus/vmbus: not in enabled drivers build config
00:01:47.036 common/cnxk: not in enabled drivers build config
00:01:47.036 common/mlx5: not in enabled drivers build config
00:01:47.036 common/qat: not in enabled drivers build config
00:01:47.036 common/sfc_efx: not in enabled drivers build config
00:01:47.036 mempool/bucket: not in enabled drivers build config
00:01:47.036 mempool/cnxk: not in enabled drivers build config
00:01:47.036 mempool/dpaa: not in enabled drivers build config
00:01:47.036 mempool/dpaa2: not in enabled drivers build config
00:01:47.036 mempool/octeontx: not in enabled drivers build config
00:01:47.036 mempool/stack: not in enabled drivers build config
00:01:47.036 dma/cnxk: not in enabled drivers build config
00:01:47.036 dma/dpaa: not in enabled drivers build config
00:01:47.036 dma/dpaa2: not in enabled drivers build config
00:01:47.036 dma/hisilicon: not in enabled drivers build config
00:01:47.036 dma/idxd: not in enabled drivers build config
00:01:47.036 dma/ioat: not in enabled drivers build config
00:01:47.036 dma/skeleton: not in enabled drivers build config
00:01:47.036 net/af_packet: not in enabled drivers build config
00:01:47.036 net/af_xdp: not in enabled drivers build config
00:01:47.036 net/ark: not in enabled drivers build config
00:01:47.036 net/atlantic: not in enabled drivers build config
00:01:47.036 net/avp: not in enabled drivers build config
00:01:47.036 net/axgbe: not in enabled drivers build config
00:01:47.036 net/bnx2x: not in enabled drivers build config
00:01:47.036 net/bnxt: not in enabled drivers build config
00:01:47.036 net/bonding: not in enabled drivers build config
00:01:47.036 net/cnxk: not in enabled drivers build config
00:01:47.036 net/cxgbe: not in enabled drivers build config
00:01:47.036 net/dpaa: not in enabled drivers build config
00:01:47.036 net/dpaa2: not in enabled drivers build config
00:01:47.036 net/e1000: not in enabled drivers build config
00:01:47.036 net/ena: not in enabled drivers build config
00:01:47.036 net/enetc: not in enabled drivers build config
00:01:47.036 net/enetfec: not in enabled drivers build config
00:01:47.036 net/enic: not in enabled drivers build config
00:01:47.036 net/failsafe: not in enabled drivers build config
00:01:47.036 net/fm10k: not in enabled drivers build config
00:01:47.036 net/gve: not in enabled drivers build config
00:01:47.036 net/hinic: not in enabled drivers build config
00:01:47.036 net/hns3: not in enabled drivers build config
00:01:47.036 net/iavf: not in enabled drivers build config
00:01:47.036 net/ice: not in enabled drivers build config
00:01:47.036 net/idpf: not in enabled drivers build config
00:01:47.036 net/igc: not in enabled drivers build config
00:01:47.036 net/ionic: not in enabled drivers build config
00:01:47.036 net/ipn3ke: not in enabled drivers build config
00:01:47.036 net/ixgbe: not in enabled drivers build config
00:01:47.036 net/kni: not in enabled drivers build config
00:01:47.036 net/liquidio: not in enabled drivers build config
00:01:47.036 net/mana: not in enabled drivers build config
00:01:47.036 net/memif: not in enabled drivers build config
00:01:47.036 net/mlx4: not in enabled drivers build config
00:01:47.036 net/mlx5: not in enabled drivers build config
00:01:47.036 net/mvneta: not in enabled drivers build config
00:01:47.036 net/mvpp2: not in enabled drivers build config
00:01:47.036 net/netvsc: not in enabled drivers build config
00:01:47.036 net/nfb: not in enabled drivers build config
00:01:47.036 net/nfp: not in enabled drivers build config
00:01:47.036 net/ngbe: not in enabled drivers build config
00:01:47.036 net/null: not in enabled drivers build config
00:01:47.036 net/octeontx: not in enabled drivers build config
00:01:47.036 net/octeon_ep: not in enabled drivers build config
00:01:47.036 net/pcap: not in enabled drivers build config
00:01:47.036 net/pfe: not in enabled drivers build config
00:01:47.036 net/qede: not in enabled drivers build config
00:01:47.036 net/ring: not in enabled drivers build config
00:01:47.036 net/sfc: not in enabled drivers build config
00:01:47.036 net/softnic: not in enabled drivers build config
00:01:47.036 net/tap: not in enabled drivers build config
00:01:47.036 net/thunderx: not in enabled drivers build config
00:01:47.036 net/txgbe: not in enabled drivers build config
00:01:47.036 net/vdev_netvsc: not in enabled drivers build config
00:01:47.036 net/vhost: not in enabled drivers build config
00:01:47.036 net/virtio: not in enabled drivers build config
00:01:47.036 net/vmxnet3: not in enabled drivers build config
00:01:47.036 raw/cnxk_bphy: not in enabled drivers build config
00:01:47.036 raw/cnxk_gpio: not in enabled drivers build config
00:01:47.036 raw/dpaa2_cmdif: not in enabled drivers build config
00:01:47.036 raw/ifpga: not in enabled drivers build config
00:01:47.036 raw/ntb: not in enabled drivers build config
00:01:47.036 raw/skeleton: not in enabled drivers build config
00:01:47.036 crypto/armv8: not in enabled drivers build config
00:01:47.036 crypto/bcmfs: not in enabled drivers build config
00:01:47.036 crypto/caam_jr: not in enabled drivers build config
00:01:47.036 crypto/ccp: not in enabled drivers build config
00:01:47.036 crypto/cnxk: not in enabled drivers build config
00:01:47.036 crypto/dpaa_sec: not in enabled drivers build config
00:01:47.036 crypto/dpaa2_sec: not in enabled drivers build config
00:01:47.036 crypto/ipsec_mb: not in enabled drivers build config
00:01:47.036 crypto/mlx5: not in enabled drivers build config
00:01:47.036 crypto/mvsam: not in enabled drivers build config
00:01:47.036 crypto/nitrox: not in enabled drivers build config
00:01:47.036 crypto/null: not in enabled drivers build config
00:01:47.036 crypto/octeontx: not in enabled drivers build config
00:01:47.036 crypto/openssl: not in enabled drivers build config
00:01:47.036 crypto/scheduler: not in enabled drivers build config
00:01:47.036 crypto/uadk: not in enabled drivers build config
00:01:47.036 crypto/virtio: not in enabled drivers build config
00:01:47.036 compress/isal: not in enabled drivers build config
00:01:47.036 compress/mlx5: not in enabled drivers build config
00:01:47.036 compress/octeontx: not in enabled drivers build config
00:01:47.036 compress/zlib: not in enabled drivers build config
00:01:47.036 regex/mlx5: not in enabled drivers build config
00:01:47.036 regex/cn9k: not in enabled drivers build config
00:01:47.036 vdpa/ifc: not in enabled drivers build config
00:01:47.036 vdpa/mlx5: not in enabled drivers build config
00:01:47.037 vdpa/sfc: not in enabled drivers build config
00:01:47.037 event/cnxk: not in enabled drivers build config
00:01:47.037 event/dlb2: not in enabled drivers build config
00:01:47.037 event/dpaa: not in enabled drivers build config
00:01:47.037 event/dpaa2: not in enabled drivers build config
00:01:47.037 event/dsw: not in enabled drivers build config
00:01:47.037 event/opdl: not in enabled drivers build config
00:01:47.037 event/skeleton: not in enabled drivers build config
00:01:47.037 event/sw: not in enabled drivers build config
00:01:47.037 event/octeontx: not in enabled drivers build config
00:01:47.037 baseband/acc: not in enabled drivers build config
00:01:47.037 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:01:47.037 baseband/fpga_lte_fec: not in enabled drivers build config
00:01:47.037 baseband/la12xx: not in enabled drivers build config
00:01:47.037 baseband/null: not in
enabled drivers build config
00:01:47.037 baseband/turbo_sw: not in enabled drivers build config
00:01:47.037 gpu/cuda: not in enabled drivers build config
00:01:47.037
00:01:47.037
00:01:47.037 Build targets in project: 311
00:01:47.037
00:01:47.037 DPDK 22.11.4
00:01:47.037
00:01:47.037 User defined options
00:01:47.037 libdir : lib
00:01:47.037 prefix : /home/vagrant/spdk_repo/dpdk/build
00:01:47.037 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:47.037 c_link_args :
00:01:47.037 enable_docs : false
00:01:47.037 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:47.037 enable_kmods : false
00:01:47.037 machine : native
00:01:47.037 tests : false
00:01:47.037
00:01:47.037 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:47.037 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:01:47.295 23:41:41 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:01:47.295 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:01:47.295 [1/740] Generating lib/rte_telemetry_def with a custom command
00:01:47.295 [2/740] Generating lib/rte_telemetry_mingw with a custom command
00:01:47.295 [3/740] Generating lib/rte_kvargs_mingw with a custom command
00:01:47.295 [4/740] Generating lib/rte_kvargs_def with a custom command
00:01:47.295 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:47.553 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:47.553 [7/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:47.553 [8/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:47.553 [9/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:47.553 [10/740] Linking static target lib/librte_kvargs.a
00:01:47.553 [11/740] Compiling C object
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:47.553 [12/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:47.553 [13/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:47.553 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:47.553 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:47.553 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:47.553 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:47.553 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:47.553 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:47.553 [20/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.553 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:47.553 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:47.813 [23/740] Linking target lib/librte_kvargs.so.23.0 00:01:47.813 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:47.813 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:47.813 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:47.813 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:47.813 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:47.813 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:47.813 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:47.813 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:47.813 [32/740] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:47.813 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:47.813 [34/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:47.813 [35/740] Linking static target lib/librte_telemetry.a 00:01:47.813 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:48.072 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:48.072 [38/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:48.072 [39/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:48.072 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:48.072 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:48.072 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:48.072 [43/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:48.072 [44/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.072 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:48.332 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:48.332 [47/740] Linking target lib/librte_telemetry.so.23.0 00:01:48.332 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:48.332 [49/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:48.332 [50/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:48.332 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:48.332 [52/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:48.332 [53/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:48.332 [54/740] Compiling C object 
lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:48.332 [55/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:48.332 [56/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:48.332 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:48.332 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:48.332 [59/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:48.332 [60/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:48.332 [61/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:48.332 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:48.332 [63/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:48.332 [64/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:48.332 [65/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:48.332 [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:48.592 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:48.592 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:48.592 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:48.592 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:48.592 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:48.592 [72/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:48.592 [73/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:48.592 [74/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:48.592 [75/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:48.592 [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:48.592 [77/740] 
Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:48.592 [78/740] Generating lib/rte_eal_def with a custom command 00:01:48.592 [79/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:48.592 [80/740] Generating lib/rte_eal_mingw with a custom command 00:01:48.592 [81/740] Generating lib/rte_ring_def with a custom command 00:01:48.592 [82/740] Generating lib/rte_ring_mingw with a custom command 00:01:48.592 [83/740] Generating lib/rte_rcu_def with a custom command 00:01:48.592 [84/740] Generating lib/rte_rcu_mingw with a custom command 00:01:48.592 [85/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:48.592 [86/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:48.851 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:48.851 [88/740] Linking static target lib/librte_ring.a 00:01:48.851 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:48.851 [90/740] Generating lib/rte_mempool_def with a custom command 00:01:48.851 [91/740] Generating lib/rte_mempool_mingw with a custom command 00:01:48.851 [92/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:48.851 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:48.851 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.110 [95/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:49.110 [96/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:49.110 [97/740] Generating lib/rte_mbuf_def with a custom command 00:01:49.110 [98/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:49.110 [99/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:49.110 [100/740] Generating lib/rte_mbuf_mingw with a custom command 00:01:49.110 [101/740] Linking static target lib/librte_eal.a 
00:01:49.370 [102/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:49.370 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:49.370 [104/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:49.370 [105/740] Linking static target lib/librte_rcu.a 00:01:49.370 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:49.370 [107/740] Linking static target lib/librte_mempool.a 00:01:49.370 [108/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:49.370 [109/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:49.629 [110/740] Generating lib/rte_net_def with a custom command 00:01:49.629 [111/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:49.629 [112/740] Generating lib/rte_net_mingw with a custom command 00:01:49.629 [113/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:49.629 [114/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:49.629 [115/740] Generating lib/rte_meter_def with a custom command 00:01:49.629 [116/740] Generating lib/rte_meter_mingw with a custom command 00:01:49.629 [117/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:49.629 [118/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.629 [119/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:49.629 [120/740] Linking static target lib/librte_meter.a 00:01:49.629 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:49.889 [122/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:49.889 [123/740] Linking static target lib/librte_net.a 00:01:49.889 [124/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.889 [125/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:49.889 [126/740] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:49.889 [127/740] Linking static target lib/librte_mbuf.a 00:01:49.889 [128/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:50.148 [129/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:50.148 [130/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.148 [131/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.148 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:50.148 [133/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:50.407 [134/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:50.407 [135/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.407 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:50.407 [137/740] Generating lib/rte_ethdev_def with a custom command 00:01:50.666 [138/740] Generating lib/rte_ethdev_mingw with a custom command 00:01:50.666 [139/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:50.666 [140/740] Generating lib/rte_pci_def with a custom command 00:01:50.666 [141/740] Generating lib/rte_pci_mingw with a custom command 00:01:50.666 [142/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:50.666 [143/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:50.666 [144/740] Linking static target lib/librte_pci.a 00:01:50.666 [145/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:50.666 [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:50.666 [147/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:50.666 [148/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:50.925 [149/740] Generating lib/pci.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:50.925 [150/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:50.925 [151/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:50.925 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:50.925 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:50.925 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:50.925 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:50.925 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:50.925 [157/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:50.925 [158/740] Generating lib/rte_cmdline_def with a custom command 00:01:50.925 [159/740] Generating lib/rte_cmdline_mingw with a custom command 00:01:50.925 [160/740] Generating lib/rte_metrics_def with a custom command 00:01:50.925 [161/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:50.925 [162/740] Generating lib/rte_metrics_mingw with a custom command 00:01:50.925 [163/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:51.186 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:51.186 [165/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:51.186 [166/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:51.186 [167/740] Linking static target lib/librte_cmdline.a 00:01:51.186 [168/740] Generating lib/rte_hash_def with a custom command 00:01:51.186 [169/740] Generating lib/rte_hash_mingw with a custom command 00:01:51.186 [170/740] Generating lib/rte_timer_def with a custom command 00:01:51.186 [171/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:51.186 [172/740] Generating lib/rte_timer_mingw with a 
custom command 00:01:51.186 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:51.444 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:51.444 [175/740] Linking static target lib/librte_metrics.a 00:01:51.444 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:51.444 [177/740] Linking static target lib/librte_timer.a 00:01:51.704 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.704 [179/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:51.704 [180/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:51.704 [181/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.704 [182/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.964 [183/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:51.964 [184/740] Generating lib/rte_acl_def with a custom command 00:01:51.964 [185/740] Generating lib/rte_acl_mingw with a custom command 00:01:51.964 [186/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:51.964 [187/740] Generating lib/rte_bbdev_def with a custom command 00:01:52.224 [188/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:52.224 [189/740] Generating lib/rte_bbdev_mingw with a custom command 00:01:52.224 [190/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:52.224 [191/740] Generating lib/rte_bitratestats_def with a custom command 00:01:52.224 [192/740] Linking static target lib/librte_ethdev.a 00:01:52.224 [193/740] Generating lib/rte_bitratestats_mingw with a custom command 00:01:52.483 [194/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:52.483 [195/740] Linking static target lib/librte_bitratestats.a 00:01:52.483 [196/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 
00:01:52.483 [197/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:52.483 [198/740] Linking static target lib/librte_bbdev.a 00:01:52.483 [199/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:52.742 [200/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.742 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:53.001 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:53.001 [203/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.260 [204/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:53.260 [205/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:53.260 [206/740] Linking static target lib/librte_hash.a 00:01:53.520 [207/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:53.520 [208/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:53.520 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:53.779 [210/740] Generating lib/rte_bpf_def with a custom command 00:01:53.779 [211/740] Generating lib/rte_bpf_mingw with a custom command 00:01:53.779 [212/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:53.779 [213/740] Generating lib/rte_cfgfile_def with a custom command 00:01:53.779 [214/740] Generating lib/rte_cfgfile_mingw with a custom command 00:01:54.038 [215/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:54.038 [216/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:54.038 [217/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.038 [218/740] Linking static target lib/librte_cfgfile.a 00:01:54.038 [219/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:54.038 [220/740] Generating lib/rte_compressdev_def with a custom command 00:01:54.038 [221/740] Generating 
lib/rte_compressdev_mingw with a custom command 00:01:54.038 [222/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:01:54.297 [223/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:54.297 [224/740] Linking static target lib/librte_bpf.a 00:01:54.297 [225/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.297 [226/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:54.297 [227/740] Generating lib/rte_cryptodev_def with a custom command 00:01:54.297 [228/740] Generating lib/rte_cryptodev_mingw with a custom command 00:01:54.297 [229/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:54.297 [230/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:54.297 [231/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:54.297 [232/740] Linking static target lib/librte_acl.a 00:01:54.554 [233/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:54.554 [234/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.554 [235/740] Linking static target lib/librte_compressdev.a 00:01:54.554 [236/740] Generating lib/rte_distributor_def with a custom command 00:01:54.554 [237/740] Generating lib/rte_distributor_mingw with a custom command 00:01:54.554 [238/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:54.554 [239/740] Generating lib/rte_efd_def with a custom command 00:01:54.554 [240/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.554 [241/740] Generating lib/rte_efd_mingw with a custom command 00:01:54.811 [242/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.811 [243/740] Compiling C object 
lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:54.811 [244/740] Linking target lib/librte_eal.so.23.0 00:01:54.811 [245/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:54.811 [246/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:55.070 [247/740] Linking target lib/librte_ring.so.23.0 00:01:55.070 [248/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:55.070 [249/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:55.070 [250/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:55.070 [251/740] Linking target lib/librte_meter.so.23.0 00:01:55.070 [252/740] Linking target lib/librte_rcu.so.23.0 00:01:55.070 [253/740] Linking target lib/librte_mempool.so.23.0 00:01:55.070 [254/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:55.070 [255/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.070 [256/740] Linking target lib/librte_pci.so.23.0 00:01:55.070 [257/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:55.070 [258/740] Linking target lib/librte_timer.so.23.0 00:01:55.070 [259/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:55.328 [260/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:55.328 [261/740] Linking target lib/librte_acl.so.23.0 00:01:55.328 [262/740] Linking target lib/librte_cfgfile.so.23.0 00:01:55.329 [263/740] Linking static target lib/librte_distributor.a 00:01:55.329 [264/740] Linking target lib/librte_mbuf.so.23.0 00:01:55.329 [265/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:55.329 [266/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 
00:01:55.329 [267/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:55.329 [268/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:55.329 [269/740] Linking target lib/librte_net.so.23.0 00:01:55.329 [270/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.329 [271/740] Linking target lib/librte_bbdev.so.23.0 00:01:55.587 [272/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:55.587 [273/740] Linking target lib/librte_compressdev.so.23.0 00:01:55.587 [274/740] Linking target lib/librte_cmdline.so.23.0 00:01:55.587 [275/740] Linking target lib/librte_hash.so.23.0 00:01:55.587 [276/740] Linking target lib/librte_distributor.so.23.0 00:01:55.587 [277/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:55.587 [278/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:55.587 [279/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:55.587 [280/740] Linking static target lib/librte_efd.a 00:01:55.587 [281/740] Generating lib/rte_eventdev_def with a custom command 00:01:55.587 [282/740] Generating lib/rte_eventdev_mingw with a custom command 00:01:55.587 [283/740] Generating lib/rte_gpudev_def with a custom command 00:01:55.588 [284/740] Generating lib/rte_gpudev_mingw with a custom command 00:01:55.846 [285/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.846 [286/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:55.846 [287/740] Linking target lib/librte_efd.so.23.0 00:01:56.104 [288/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.104 [289/740] Linking target lib/librte_ethdev.so.23.0 00:01:56.104 [290/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 
00:01:56.104 [291/740] Linking static target lib/librte_cryptodev.a 00:01:56.104 [292/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:56.104 [293/740] Linking target lib/librte_metrics.so.23.0 00:01:56.363 [294/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:56.363 [295/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:56.363 [296/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:56.363 [297/740] Linking target lib/librte_bpf.so.23.0 00:01:56.363 [298/740] Linking static target lib/librte_gpudev.a 00:01:56.363 [299/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:56.363 [300/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:56.363 [301/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:56.363 [302/740] Linking target lib/librte_bitratestats.so.23.0 00:01:56.363 [303/740] Generating lib/rte_gro_def with a custom command 00:01:56.363 [304/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:56.363 [305/740] Generating lib/rte_gro_mingw with a custom command 00:01:56.363 [306/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:56.621 [307/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:56.621 [308/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:56.621 [309/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:56.880 [310/740] Generating lib/rte_gso_def with a custom command 00:01:56.880 [311/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:56.880 [312/740] Generating lib/rte_gso_mingw with a custom command 00:01:56.880 [313/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:56.880 [314/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:56.880 [315/740] Linking static target 
lib/librte_gro.a
00:01:56.880 [316/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:56.880 [317/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:01:56.880 [318/740] Linking target lib/librte_gpudev.so.23.0
00:01:57.139 [319/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:01:57.139 [320/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:01:57.139 [321/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.139 [322/740] Linking static target lib/librte_eventdev.a
00:01:57.139 [323/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:01:57.139 [324/740] Linking target lib/librte_gro.so.23.0
00:01:57.139 [325/740] Linking static target lib/librte_gso.a
00:01:57.139 [326/740] Generating lib/rte_ip_frag_def with a custom command
00:01:57.139 [327/740] Generating lib/rte_ip_frag_mingw with a custom command
00:01:57.139 [328/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:01:57.139 [329/740] Generating lib/rte_jobstats_def with a custom command
00:01:57.139 [330/740] Generating lib/rte_jobstats_mingw with a custom command
00:01:57.139 [331/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.397 [332/740] Linking target lib/librte_gso.so.23.0
00:01:57.397 [333/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:01:57.397 [334/740] Linking static target lib/librte_jobstats.a
00:01:57.397 [335/740] Generating lib/rte_latencystats_def with a custom command
00:01:57.397 [336/740] Generating lib/rte_latencystats_mingw with a custom command
00:01:57.397 [337/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:01:57.397 [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:01:57.397 [339/740] Generating lib/rte_lpm_def with a custom command
00:01:57.397 [340/740] Generating lib/rte_lpm_mingw with a custom command
00:01:57.397 [341/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:01:57.397 [342/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:01:57.655 [343/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.655 [344/740] Linking target lib/librte_jobstats.so.23.0
00:01:57.655 [345/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:01:57.655 [346/740] Linking static target lib/librte_ip_frag.a
00:01:57.655 [347/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:01:57.655 [348/740] Linking static target lib/librte_latencystats.a
00:01:57.913 [349/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.913 [350/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.913 [351/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:01:57.913 [352/740] Linking target lib/librte_ip_frag.so.23.0
00:01:57.913 [353/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:01:57.913 [354/740] Linking target lib/librte_cryptodev.so.23.0
00:01:57.913 [355/740] Linking static target lib/member/libsketch_avx512_tmp.a
00:01:57.913 [356/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:01:57.913 [357/740] Generating lib/rte_member_def with a custom command
00:01:57.913 [358/740] Generating lib/rte_member_mingw with a custom command
00:01:57.913 [359/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.913 [360/740] Generating lib/rte_pcapng_def with a custom command
00:01:57.913 [361/740] Linking target lib/librte_latencystats.so.23.0
00:01:57.913 [362/740] Generating lib/rte_pcapng_mingw with a custom command
00:01:57.913 [363/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols
00:01:57.913 [364/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols
00:01:57.913 [365/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:58.172 [366/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:58.172 [367/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:58.172 [368/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:58.172 [369/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:01:58.172 [370/740] Linking static target lib/librte_lpm.a
00:01:58.430 [371/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:01:58.430 [372/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o
00:01:58.430 [373/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:01:58.430 [374/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:58.430 [375/740] Generating lib/rte_power_def with a custom command
00:01:58.430 [376/740] Generating lib/rte_power_mingw with a custom command
00:01:58.430 [377/740] Generating lib/rte_rawdev_def with a custom command
00:01:58.430 [378/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:58.430 [379/740] Generating lib/rte_rawdev_mingw with a custom command
00:01:58.430 [380/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.430 [381/740] Generating lib/rte_regexdev_def with a custom command
00:01:58.430 [382/740] Linking target lib/librte_lpm.so.23.0
00:01:58.690 [383/740] Generating lib/rte_regexdev_mingw with a custom command
00:01:58.690 [384/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.690 [385/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:01:58.690 [386/740] Linking static target lib/librte_pcapng.a
00:01:58.690 [387/740] Linking target lib/librte_eventdev.so.23.0
00:01:58.690 [388/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:58.690 [389/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols
00:01:58.690 [390/740] Generating lib/rte_dmadev_def with a custom command
00:01:58.690 [391/740] Generating lib/rte_dmadev_mingw with a custom command
00:01:58.690 [392/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols
00:01:58.690 [393/740] Generating lib/rte_rib_def with a custom command
00:01:58.690 [394/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:01:58.690 [395/740] Linking static target lib/librte_rawdev.a
00:01:58.690 [396/740] Generating lib/rte_rib_mingw with a custom command
00:01:58.690 [397/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o
00:01:58.690 [398/740] Generating lib/rte_reorder_def with a custom command
00:01:58.690 [399/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.950 [400/740] Generating lib/rte_reorder_mingw with a custom command
00:01:58.950 [401/740] Linking target lib/librte_pcapng.so.23.0
00:01:58.950 [402/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:58.950 [403/740] Linking static target lib/librte_dmadev.a
00:01:58.950 [404/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:58.950 [405/740] Linking static target lib/librte_power.a
00:01:58.950 [406/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols
00:01:58.950 [407/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:01:58.950 [408/740] Linking static target lib/librte_regexdev.a
00:01:58.950 [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:01:59.210 [410/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.210 [411/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:01:59.210 [412/740] Linking target lib/librte_rawdev.so.23.0
00:01:59.210 [413/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:01:59.210 [414/740] Generating lib/rte_sched_def with a custom command
00:01:59.210 [415/740] Generating lib/rte_sched_mingw with a custom command
00:01:59.210 [416/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:01:59.210 [417/740] Generating lib/rte_security_def with a custom command
00:01:59.210 [418/740] Generating lib/rte_security_mingw with a custom command
00:01:59.210 [419/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:59.210 [420/740] Linking static target lib/librte_reorder.a
00:01:59.210 [421/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:01:59.210 [422/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.210 [423/740] Linking static target lib/librte_member.a
00:01:59.210 [424/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:01:59.210 [425/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:01:59.210 [426/740] Linking target lib/librte_dmadev.so.23.0
00:01:59.470 [427/740] Generating lib/rte_stack_def with a custom command
00:01:59.470 [428/740] Generating lib/rte_stack_mingw with a custom command
00:01:59.470 [429/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:01:59.470 [430/740] Linking static target lib/librte_stack.a
00:01:59.470 [431/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:01:59.470 [432/740] Linking static target lib/librte_rib.a
00:01:59.470 [433/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols
00:01:59.470 [434/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.470 [435/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:59.470 [436/740] Linking target lib/librte_reorder.so.23.0
00:01:59.470 [437/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.470 [438/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.470 [439/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.729 [440/740] Linking target lib/librte_regexdev.so.23.0
00:01:59.729 [441/740] Linking target lib/librte_stack.so.23.0
00:01:59.729 [442/740] Linking target lib/librte_member.so.23.0
00:01:59.729 [443/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.729 [444/740] Linking target lib/librte_power.so.23.0
00:01:59.729 [445/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:59.729 [446/740] Linking static target lib/librte_security.a
00:01:59.729 [447/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.729 [448/740] Linking target lib/librte_rib.so.23.0
00:01:59.989 [449/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:59.989 [450/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols
00:01:59.989 [451/740] Generating lib/rte_vhost_def with a custom command
00:01:59.989 [452/740] Generating lib/rte_vhost_mingw with a custom command
00:01:59.989 [453/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:59.989 [454/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.989 [455/740] Linking target lib/librte_security.so.23.0
00:02:00.249 [456/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:00.249 [457/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols
00:02:00.249 [458/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:00.249 [459/740] Linking static target lib/librte_sched.a
00:02:00.508 [460/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:00.508 [461/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:00.508 [462/740] Generating lib/rte_ipsec_def with a custom command
00:02:00.508 [463/740] Generating lib/rte_ipsec_mingw with a custom command
00:02:00.508 [464/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.766 [465/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:00.766 [466/740] Linking target lib/librte_sched.so.23.0
00:02:00.766 [467/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:00.766 [468/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols
00:02:00.766 [469/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:00.766 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:00.766 [471/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:00.766 [472/740] Generating lib/rte_fib_def with a custom command
00:02:01.028 [473/740] Generating lib/rte_fib_mingw with a custom command
00:02:01.028 [474/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:01.286 [475/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:02:01.286 [476/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:02:01.286 [477/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:02:01.286 [478/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:02:01.286 [479/740] Linking static target lib/librte_ipsec.a
00:02:01.546 [480/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:01.547 [481/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:01.547 [482/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:01.547 [483/740] Linking static target lib/librte_fib.a
00:02:01.547 [484/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.547 [485/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:01.813 [486/740] Linking target lib/librte_ipsec.so.23.0
00:02:01.813 [487/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:01.813 [488/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:01.814 [489/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.814 [490/740] Linking target lib/librte_fib.so.23.0
00:02:01.814 [491/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:02.385 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:02.385 [493/740] Generating lib/rte_port_def with a custom command
00:02:02.385 [494/740] Generating lib/rte_port_mingw with a custom command
00:02:02.385 [495/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:02.385 [496/740] Generating lib/rte_pdump_def with a custom command
00:02:02.385 [497/740] Generating lib/rte_pdump_mingw with a custom command
00:02:02.385 [498/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:02.385 [499/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:02.385 [500/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:02.646 [501/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:02.646 [502/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:02.646 [503/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:02.646 [504/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:02.646 [505/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:02.905 [506/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:02:02.905 [507/740] Linking static target lib/librte_port.a
00:02:02.905 [508/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:03.164 [509/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:03.164 [510/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:03.164 [511/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:03.164 [512/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:03.164 [513/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:03.164 [514/740] Linking static target lib/librte_pdump.a
00:02:03.424 [515/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.424 [516/740] Linking target lib/librte_port.so.23.0
00:02:03.424 [517/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.424 [518/740] Linking target lib/librte_pdump.so.23.0
00:02:03.424 [519/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols
00:02:03.424 [520/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:03.683 [521/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:03.683 [522/740] Generating lib/rte_table_def with a custom command
00:02:03.683 [523/740] Generating lib/rte_table_mingw with a custom command
00:02:03.683 [524/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:03.943 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:03.943 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:02:03.943 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:03.943 [528/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:03.943 [529/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:02:03.943 [530/740] Generating lib/rte_pipeline_def with a custom command
00:02:03.943 [531/740] Generating lib/rte_pipeline_mingw with a custom command
00:02:03.943 [532/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:02:04.202 [533/740] Linking static target lib/librte_table.a
00:02:04.202 [534/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:02:04.461 [535/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:02:04.461 [536/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.461 [537/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:02:04.461 [538/740] Linking target lib/librte_table.so.23.0
00:02:04.721 [539/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:02:04.721 [540/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols
00:02:04.721 [541/740] Generating lib/rte_graph_def with a custom command
00:02:04.721 [542/740] Generating lib/rte_graph_mingw with a custom command
00:02:04.721 [543/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:04.981 [544/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:02:04.981 [545/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:04.981 [546/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:02:04.981 [547/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:04.981 [548/740] Linking static target lib/librte_graph.a
00:02:05.240 [549/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:05.240 [550/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:05.240 [551/740] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:05.240 [552/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:05.500 [553/740] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:05.500 [554/740] Generating lib/rte_node_def with a custom command
00:02:05.500 [555/740] Generating lib/rte_node_mingw with a custom command
00:02:05.500 [556/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:05.791 [557/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:05.791 [558/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.791 [559/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:02:05.791 [560/740] Linking target lib/librte_graph.so.23.0
00:02:05.791 [561/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:02:05.791 [562/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:05.791 [563/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols
00:02:05.791 [564/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:05.791 [565/740] Generating drivers/rte_bus_pci_def with a custom command
00:02:06.050 [566/740] Generating drivers/rte_bus_pci_mingw with a custom command
00:02:06.050 [567/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:06.050 [568/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:06.050 [569/740] Generating drivers/rte_bus_vdev_def with a custom command
00:02:06.050 [570/740] Generating drivers/rte_bus_vdev_mingw with a custom command
00:02:06.050 [571/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:02:06.050 [572/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:02:06.051 [573/740] Generating drivers/rte_mempool_ring_def with a custom command
00:02:06.051 [574/740] Generating drivers/rte_mempool_ring_mingw with a custom command
00:02:06.051 [575/740] Linking static target lib/librte_node.a
00:02:06.051 [576/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:06.051 [577/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:06.051 [578/740] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:06.310 [579/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.310 [580/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:06.310 [581/740] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:06.310 [582/740] Linking target lib/librte_node.so.23.0
00:02:06.310 [583/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:06.310 [584/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:06.310 [585/740] Linking static target drivers/librte_bus_vdev.a
00:02:06.310 [586/740] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:06.310 [587/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:06.310 [588/740] Linking static target drivers/librte_bus_pci.a
00:02:06.570 [589/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.570 [590/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:06.570 [591/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:06.570 [592/740] Linking target drivers/librte_bus_vdev.so.23.0
00:02:06.570 [593/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:06.570 [594/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:06.570 [595/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols
00:02:06.570 [596/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:02:06.829 [597/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.829 [598/740] Linking target drivers/librte_bus_pci.so.23.0
00:02:06.829 [599/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:06.829 [600/740] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:06.829 [601/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols
00:02:07.087 [602/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:07.087 [603/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:07.087 [604/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:07.087 [605/740] Linking static target drivers/librte_mempool_ring.a
00:02:07.087 [606/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:07.087 [607/740] Linking target drivers/librte_mempool_ring.so.23.0
00:02:07.346 [608/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:07.604 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:02:07.863 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:02:07.863 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a
00:02:07.863 [612/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:02:08.122 [613/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:02:08.122 [614/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:02:08.381 [615/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:02:08.381 [616/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:02:08.381 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:02:08.640 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:02:08.640 [619/740] Generating drivers/rte_net_i40e_def with a custom command
00:02:08.640 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:02:08.640 [621/740] Generating drivers/rte_net_i40e_mingw with a custom command
00:02:08.899 [622/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:09.158 [623/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:02:09.417 [624/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:02:09.676 [625/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:02:09.677 [626/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:02:09.677 [627/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:02:09.677 [628/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:02:09.936 [629/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:02:09.936 [630/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:02:09.936 [631/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:02:09.936 [632/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:02:09.936 [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o
00:02:10.195 [634/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:02:10.454 [635/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:02:10.454 [636/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:02:10.454 [637/740] Linking static target drivers/libtmp_rte_net_i40e.a
00:02:10.454 [638/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:02:10.454 [639/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:02:10.713 [640/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:02:10.713 [641/740] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:02:10.713 [642/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:10.713 [643/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:02:10.713 [644/740] Linking static target drivers/librte_net_i40e.a
00:02:10.713 [645/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:10.972 [646/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:02:10.972 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:02:11.231 [648/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:02:11.231 [649/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:02:11.231 [650/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:02:11.231 [651/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.507 [652/740] Linking target drivers/librte_net_i40e.so.23.0
00:02:11.507 [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:02:11.507 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:02:11.507 [655/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:11.766 [656/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:02:11.766 [657/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:02:11.766 [658/740] Linking static target lib/librte_vhost.a
00:02:11.766 [659/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:02:11.766 [660/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:02:11.766 [661/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:02:11.766 [662/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:02:12.026 [663/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:02:12.026 [664/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:02:12.026 [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:02:12.286 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:02:12.286 [667/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:02:12.545 [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:02:12.545 [669/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.545 [670/740] Linking target lib/librte_vhost.so.23.0
00:02:12.805 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:02:12.805 [672/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:02:13.064 [673/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:02:13.064 [674/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:02:13.064 [675/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:02:13.324 [676/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:02:13.324 [677/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:02:13.324 [678/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:02:13.324 [679/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:02:13.591 [680/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:02:13.591 [681/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:02:13.591 [682/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:02:13.591 [683/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:02:13.868 [684/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:02:13.868 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:02:13.868 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:02:13.868 [687/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:02:14.128 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:02:14.128 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:02:14.128 [690/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:02:14.388 [691/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:02:14.388 [692/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:02:14.388 [693/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:02:14.648 [694/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:02:14.648 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:02:14.648 [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:02:14.912 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:02:15.176 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:02:15.176 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:02:15.176 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:02:15.176 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:02:15.746 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:02:15.746 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:02:15.746 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:02:15.746 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:02:16.013 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:02:16.013 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:02:16.272 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:02:16.272 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:02:16.532 [710/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:02:16.532 [711/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:16.798 [712/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:02:16.798 [713/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:02:16.798 [714/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:02:16.798 [715/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:02:17.058 [716/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:02:17.058 [717/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:02:17.317 [718/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:02:17.577 [719/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:02:18.513 [720/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:18.773 [721/740] Linking static target lib/librte_pipeline.a
00:02:19.039 [722/740] Linking target app/dpdk-proc-info
00:02:19.039 [723/740] Linking target app/dpdk-pdump
00:02:19.039 [724/740] Linking target app/dpdk-test-cmdline
00:02:19.039 [725/740] Linking target app/dpdk-test-acl
00:02:19.039 [726/740] Linking target app/dpdk-test-bbdev
00:02:19.039 [727/740] Linking target app/dpdk-test-eventdev
00:02:19.039 [728/740] Linking target app/dpdk-dumpcap
00:02:19.039 [729/740] Linking target app/dpdk-test-compress-perf
00:02:19.307 [730/740] Linking target app/dpdk-test-crypto-perf
00:02:19.307 [731/740] Linking target app/dpdk-test-fib
00:02:19.308 [732/740] Linking target app/dpdk-test-flow-perf
00:02:19.308 [733/740] Linking target app/dpdk-test-gpudev
00:02:19.308 [734/740] Linking target app/dpdk-test-pipeline
00:02:19.567 [735/740] Linking target app/dpdk-test-regex
00:02:19.567 [736/740] Linking target app/dpdk-test-sad
00:02:19.567 [737/740] Linking target app/dpdk-testpmd
00:02:19.567 [738/740] Linking target app/dpdk-test-security-perf
00:02:23.765 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.025 [740/740] Linking target lib/librte_pipeline.so.23.0
00:02:24.025 23:42:17 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:02:24.025 23:42:17 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:24.025 23:42:17 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:02:24.025 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:24.025 [0/1] Installing files.
00:02:24.287 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:24.287 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.289 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:24.290 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:24.290 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.290 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:24.291 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.291 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:24.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:24.292 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:24.292 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:24.292 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.292 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.292 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.292 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.292 Installing lib/librte_eal.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.292 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.292 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.292 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.292 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.292 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.292 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.292 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.292 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.292 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.292 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing 
lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing 
lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 
Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.552 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.553 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.553 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.553 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:24.553 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.553 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:24.553 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.553 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:24.553 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:24.553 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:24.553 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.553 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.553 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.553 Installing app/dpdk-test-acl to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.553 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.553 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.553 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.553 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.553 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.815 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.815 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.815 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.815 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.815 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.815 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.815 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.815 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.815 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.816 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing 
/home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing 
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.817 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 
Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:24.818 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:24.818 Installing symlink pointing to librte_kvargs.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:24.818 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:24.818 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:24.818 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:24.818 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:24.818 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:24.818 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:24.818 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:24.818 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:24.818 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:24.818 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:24.818 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:24.818 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:24.818 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:24.818 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:24.818 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:24.818 Installing symlink pointing to librte_meter.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:24.818 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:24.818 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:24.818 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:24.818 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:24.818 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:24.818 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:24.818 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:24.818 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:24.818 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:24.818 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:24.818 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:24.818 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:24.818 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:24.818 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:24.818 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:24.818 Installing symlink pointing to librte_bbdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:24.818 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:24.818 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:24.818 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:24.818 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:24.818 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:24.818 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:24.818 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:24.818 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:24.818 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:24.818 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:24.818 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:24.818 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:24.818 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:24.818 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:24.818 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 
00:02:24.818 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:24.818 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:24.818 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:24.818 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:24.818 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:24.818 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:24.818 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:24.818 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:24.818 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:24.818 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:24.818 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:24.818 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:24.818 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:24.818 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:24.818 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:24.818 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:24.818 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:24.818 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:24.818 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:24.818 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:24.818 Installing 
symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:24.818 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:24.818 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:24.818 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:24.818 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:24.818 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:24.818 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:24.818 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:24.818 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:24.818 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:24.819 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:02:24.819 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:24.819 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:24.819 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:24.819 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:24.819 Installing symlink pointing to librte_rawdev.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:24.819 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:24.819 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:24.819 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:24.819 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:24.819 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:24.819 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:24.819 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:24.819 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:24.819 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:24.819 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:24.819 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:24.819 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:24.819 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:24.819 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:24.819 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:24.819 Installing symlink pointing to 
librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:24.819 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:24.819 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:24.819 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:24.819 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:24.819 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:24.819 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:24.819 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:24.819 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:24.819 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:24.819 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:24.819 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:24.819 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:24.819 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:24.819 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:24.819 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:24.819 Installing symlink pointing to librte_node.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so
00:02:24.819 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23
00:02:24.819 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so
00:02:24.819 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23
00:02:24.819 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so
00:02:24.819 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23
00:02:24.819 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so
00:02:24.819 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23
00:02:24.819 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so
00:02:24.819 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0'
00:02:24.819 23:42:18 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat
00:02:24.819 ************************************
00:02:24.819 END TEST build_native_dpdk
00:02:24.819 ************************************
00:02:24.819 23:42:18 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:24.819
00:02:24.819 real	0m44.805s
00:02:24.819 user	4m20.329s
00:02:24.819 sys	0m49.315s
00:02:24.819 23:42:18 build_native_dpdk -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:02:24.819 23:42:18 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:02:25.079 23:42:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:25.079 23:42:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:25.079 23:42:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:25.079 23:42:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:25.079 23:42:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:25.079 23:42:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:25.079 23:42:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:25.079 23:42:18 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared
00:02:25.079 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:02:25.346 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:02:25.346 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include
00:02:25.346 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:25.926 Using 'verbs' RDMA provider
00:02:41.753 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:59.853 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:59.853 Creating mk/config.mk...done.
00:02:59.853 Creating mk/cc.flags.mk...done.
00:02:59.853 Type 'make' to build.
00:02:59.853 23:42:52 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:59.853 23:42:52 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:59.853 23:42:52 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:59.853 23:42:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.853 ************************************ 00:02:59.853 START TEST make 00:02:59.853 ************************************ 00:02:59.853 23:42:52 make -- common/autotest_common.sh@1127 -- $ make -j10 00:02:59.853 make[1]: Nothing to be done for 'all'. 00:03:46.546 CC lib/log/log.o 00:03:46.546 CC lib/log/log_flags.o 00:03:46.546 CC lib/ut/ut.o 00:03:46.546 CC lib/log/log_deprecated.o 00:03:46.546 CC lib/ut_mock/mock.o 00:03:46.546 LIB libspdk_ut.a 00:03:46.546 LIB libspdk_log.a 00:03:46.546 LIB libspdk_ut_mock.a 00:03:46.546 SO libspdk_ut.so.2.0 00:03:46.546 SO libspdk_ut_mock.so.6.0 00:03:46.546 SO libspdk_log.so.7.1 00:03:46.546 SYMLINK libspdk_ut.so 00:03:46.546 SYMLINK libspdk_ut_mock.so 00:03:46.546 SYMLINK libspdk_log.so 00:03:46.546 CXX lib/trace_parser/trace.o 00:03:46.546 CC lib/ioat/ioat.o 00:03:46.546 CC lib/util/base64.o 00:03:46.546 CC lib/util/bit_array.o 00:03:46.546 CC lib/util/cpuset.o 00:03:46.546 CC lib/util/crc32.o 00:03:46.546 CC lib/util/crc16.o 00:03:46.546 CC lib/util/crc32c.o 00:03:46.546 CC lib/dma/dma.o 00:03:46.546 CC lib/vfio_user/host/vfio_user_pci.o 00:03:46.546 CC lib/util/crc32_ieee.o 00:03:46.546 CC lib/util/crc64.o 00:03:46.546 CC lib/util/dif.o 00:03:46.546 CC lib/util/fd.o 00:03:46.546 CC lib/util/fd_group.o 00:03:46.546 LIB libspdk_dma.a 00:03:46.546 CC lib/vfio_user/host/vfio_user.o 00:03:46.546 SO libspdk_dma.so.5.0 00:03:46.546 CC lib/util/file.o 00:03:46.546 CC lib/util/hexlify.o 00:03:46.546 LIB libspdk_ioat.a 00:03:46.546 CC lib/util/iov.o 00:03:46.546 SYMLINK libspdk_dma.so 00:03:46.546 CC lib/util/math.o 00:03:46.546 SO libspdk_ioat.so.7.0 00:03:46.546 CC lib/util/net.o 00:03:46.546 SYMLINK libspdk_ioat.so 00:03:46.546 CC 
lib/util/pipe.o 00:03:46.546 CC lib/util/strerror_tls.o 00:03:46.546 CC lib/util/string.o 00:03:46.546 LIB libspdk_vfio_user.a 00:03:46.546 CC lib/util/uuid.o 00:03:46.546 CC lib/util/xor.o 00:03:46.546 SO libspdk_vfio_user.so.5.0 00:03:46.546 CC lib/util/zipf.o 00:03:46.546 CC lib/util/md5.o 00:03:46.546 SYMLINK libspdk_vfio_user.so 00:03:46.546 LIB libspdk_util.a 00:03:46.546 SO libspdk_util.so.10.0 00:03:46.546 LIB libspdk_trace_parser.a 00:03:46.546 SYMLINK libspdk_util.so 00:03:46.546 SO libspdk_trace_parser.so.6.0 00:03:46.546 SYMLINK libspdk_trace_parser.so 00:03:46.546 CC lib/env_dpdk/env.o 00:03:46.546 CC lib/env_dpdk/memory.o 00:03:46.546 CC lib/rdma_provider/common.o 00:03:46.546 CC lib/env_dpdk/pci.o 00:03:46.546 CC lib/env_dpdk/init.o 00:03:46.546 CC lib/idxd/idxd.o 00:03:46.546 CC lib/conf/conf.o 00:03:46.546 CC lib/vmd/vmd.o 00:03:46.546 CC lib/json/json_parse.o 00:03:46.546 CC lib/rdma_utils/rdma_utils.o 00:03:46.546 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:46.546 LIB libspdk_conf.a 00:03:46.546 CC lib/json/json_util.o 00:03:46.546 SO libspdk_conf.so.6.0 00:03:46.546 LIB libspdk_rdma_utils.a 00:03:46.546 SYMLINK libspdk_conf.so 00:03:46.546 SO libspdk_rdma_utils.so.1.0 00:03:46.546 CC lib/vmd/led.o 00:03:46.546 CC lib/env_dpdk/threads.o 00:03:46.546 LIB libspdk_rdma_provider.a 00:03:46.546 SYMLINK libspdk_rdma_utils.so 00:03:46.546 CC lib/idxd/idxd_user.o 00:03:46.546 CC lib/env_dpdk/pci_ioat.o 00:03:46.546 SO libspdk_rdma_provider.so.6.0 00:03:46.546 CC lib/env_dpdk/pci_virtio.o 00:03:46.546 SYMLINK libspdk_rdma_provider.so 00:03:46.546 CC lib/json/json_write.o 00:03:46.546 CC lib/env_dpdk/pci_vmd.o 00:03:46.546 CC lib/env_dpdk/pci_idxd.o 00:03:46.546 CC lib/env_dpdk/pci_event.o 00:03:46.546 CC lib/env_dpdk/sigbus_handler.o 00:03:46.546 CC lib/env_dpdk/pci_dpdk.o 00:03:46.546 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:46.546 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:46.546 CC lib/idxd/idxd_kernel.o 00:03:46.546 LIB libspdk_vmd.a 00:03:46.546 LIB 
libspdk_json.a 00:03:46.546 SO libspdk_vmd.so.6.0 00:03:46.546 LIB libspdk_idxd.a 00:03:46.546 SO libspdk_json.so.6.0 00:03:46.546 SYMLINK libspdk_vmd.so 00:03:46.546 SO libspdk_idxd.so.12.1 00:03:46.547 SYMLINK libspdk_json.so 00:03:46.547 SYMLINK libspdk_idxd.so 00:03:46.547 CC lib/jsonrpc/jsonrpc_server.o 00:03:46.547 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:46.547 CC lib/jsonrpc/jsonrpc_client.o 00:03:46.547 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:46.547 LIB libspdk_env_dpdk.a 00:03:46.547 LIB libspdk_jsonrpc.a 00:03:46.547 SO libspdk_jsonrpc.so.6.0 00:03:46.547 SO libspdk_env_dpdk.so.15.1 00:03:46.547 SYMLINK libspdk_jsonrpc.so 00:03:46.547 SYMLINK libspdk_env_dpdk.so 00:03:46.547 CC lib/rpc/rpc.o 00:03:46.547 LIB libspdk_rpc.a 00:03:46.547 SO libspdk_rpc.so.6.0 00:03:46.547 SYMLINK libspdk_rpc.so 00:03:46.547 CC lib/notify/notify.o 00:03:46.547 CC lib/notify/notify_rpc.o 00:03:46.547 CC lib/trace/trace.o 00:03:46.547 CC lib/trace/trace_rpc.o 00:03:46.547 CC lib/trace/trace_flags.o 00:03:46.547 CC lib/keyring/keyring_rpc.o 00:03:46.547 CC lib/keyring/keyring.o 00:03:46.547 LIB libspdk_notify.a 00:03:46.547 SO libspdk_notify.so.6.0 00:03:46.547 SYMLINK libspdk_notify.so 00:03:46.547 LIB libspdk_trace.a 00:03:46.547 LIB libspdk_keyring.a 00:03:46.547 SO libspdk_trace.so.11.0 00:03:46.547 SO libspdk_keyring.so.2.0 00:03:46.806 SYMLINK libspdk_trace.so 00:03:46.806 SYMLINK libspdk_keyring.so 00:03:47.065 CC lib/sock/sock_rpc.o 00:03:47.065 CC lib/sock/sock.o 00:03:47.065 CC lib/thread/thread.o 00:03:47.065 CC lib/thread/iobuf.o 00:03:47.632 LIB libspdk_sock.a 00:03:47.632 SO libspdk_sock.so.10.0 00:03:47.632 SYMLINK libspdk_sock.so 00:03:48.200 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:48.200 CC lib/nvme/nvme_ctrlr.o 00:03:48.200 CC lib/nvme/nvme_fabric.o 00:03:48.200 CC lib/nvme/nvme_ns_cmd.o 00:03:48.200 CC lib/nvme/nvme_pcie_common.o 00:03:48.200 CC lib/nvme/nvme_ns.o 00:03:48.200 CC lib/nvme/nvme_qpair.o 00:03:48.200 CC lib/nvme/nvme_pcie.o 00:03:48.200 CC 
lib/nvme/nvme.o 00:03:48.768 LIB libspdk_thread.a 00:03:48.768 SO libspdk_thread.so.11.0 00:03:48.768 CC lib/nvme/nvme_quirks.o 00:03:48.768 CC lib/nvme/nvme_transport.o 00:03:48.768 SYMLINK libspdk_thread.so 00:03:48.768 CC lib/nvme/nvme_discovery.o 00:03:48.768 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:49.027 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:49.027 CC lib/nvme/nvme_tcp.o 00:03:49.027 CC lib/nvme/nvme_opal.o 00:03:49.027 CC lib/accel/accel.o 00:03:49.286 CC lib/blob/blobstore.o 00:03:49.286 CC lib/nvme/nvme_io_msg.o 00:03:49.286 CC lib/accel/accel_rpc.o 00:03:49.545 CC lib/init/json_config.o 00:03:49.545 CC lib/init/subsystem.o 00:03:49.545 CC lib/init/subsystem_rpc.o 00:03:49.545 CC lib/init/rpc.o 00:03:49.545 CC lib/blob/request.o 00:03:49.545 CC lib/blob/zeroes.o 00:03:49.545 CC lib/blob/blob_bs_dev.o 00:03:49.804 LIB libspdk_init.a 00:03:49.804 CC lib/accel/accel_sw.o 00:03:49.804 SO libspdk_init.so.6.0 00:03:49.804 SYMLINK libspdk_init.so 00:03:49.804 CC lib/virtio/virtio.o 00:03:49.804 CC lib/nvme/nvme_poll_group.o 00:03:49.804 CC lib/nvme/nvme_zns.o 00:03:49.804 CC lib/virtio/virtio_vhost_user.o 00:03:50.063 CC lib/fsdev/fsdev.o 00:03:50.063 CC lib/virtio/virtio_vfio_user.o 00:03:50.322 CC lib/virtio/virtio_pci.o 00:03:50.322 LIB libspdk_accel.a 00:03:50.322 CC lib/nvme/nvme_stubs.o 00:03:50.322 SO libspdk_accel.so.16.0 00:03:50.322 SYMLINK libspdk_accel.so 00:03:50.322 CC lib/fsdev/fsdev_io.o 00:03:50.322 CC lib/nvme/nvme_auth.o 00:03:50.322 CC lib/fsdev/fsdev_rpc.o 00:03:50.580 LIB libspdk_virtio.a 00:03:50.580 CC lib/event/app.o 00:03:50.580 SO libspdk_virtio.so.7.0 00:03:50.580 CC lib/nvme/nvme_cuse.o 00:03:50.580 CC lib/event/reactor.o 00:03:50.580 CC lib/bdev/bdev.o 00:03:50.580 SYMLINK libspdk_virtio.so 00:03:50.580 CC lib/bdev/bdev_rpc.o 00:03:50.580 CC lib/bdev/bdev_zone.o 00:03:50.580 LIB libspdk_fsdev.a 00:03:50.839 SO libspdk_fsdev.so.2.0 00:03:50.839 CC lib/nvme/nvme_rdma.o 00:03:50.839 SYMLINK libspdk_fsdev.so 00:03:50.839 CC 
lib/event/log_rpc.o 00:03:50.839 CC lib/bdev/part.o 00:03:50.839 CC lib/bdev/scsi_nvme.o 00:03:50.839 CC lib/event/app_rpc.o 00:03:50.839 CC lib/event/scheduler_static.o 00:03:51.097 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:51.098 LIB libspdk_event.a 00:03:51.356 SO libspdk_event.so.14.0 00:03:51.356 SYMLINK libspdk_event.so 00:03:51.615 LIB libspdk_fuse_dispatcher.a 00:03:51.886 SO libspdk_fuse_dispatcher.so.1.0 00:03:51.886 SYMLINK libspdk_fuse_dispatcher.so 00:03:52.145 LIB libspdk_nvme.a 00:03:52.404 SO libspdk_nvme.so.14.1 00:03:52.664 SYMLINK libspdk_nvme.so 00:03:52.664 LIB libspdk_blob.a 00:03:52.664 SO libspdk_blob.so.11.0 00:03:52.923 SYMLINK libspdk_blob.so 00:03:53.182 LIB libspdk_bdev.a 00:03:53.182 CC lib/lvol/lvol.o 00:03:53.182 CC lib/blobfs/blobfs.o 00:03:53.182 CC lib/blobfs/tree.o 00:03:53.182 SO libspdk_bdev.so.17.0 00:03:53.441 SYMLINK libspdk_bdev.so 00:03:53.699 CC lib/nvmf/ctrlr.o 00:03:53.699 CC lib/nvmf/ctrlr_discovery.o 00:03:53.699 CC lib/nvmf/subsystem.o 00:03:53.699 CC lib/nvmf/ctrlr_bdev.o 00:03:53.699 CC lib/ftl/ftl_core.o 00:03:53.699 CC lib/nbd/nbd.o 00:03:53.699 CC lib/scsi/dev.o 00:03:53.699 CC lib/ublk/ublk.o 00:03:53.958 CC lib/scsi/lun.o 00:03:53.958 CC lib/ftl/ftl_init.o 00:03:53.958 CC lib/nbd/nbd_rpc.o 00:03:53.958 LIB libspdk_blobfs.a 00:03:54.216 SO libspdk_blobfs.so.10.0 00:03:54.216 CC lib/scsi/port.o 00:03:54.216 CC lib/nvmf/nvmf.o 00:03:54.216 SYMLINK libspdk_blobfs.so 00:03:54.216 CC lib/nvmf/nvmf_rpc.o 00:03:54.216 LIB libspdk_lvol.a 00:03:54.216 LIB libspdk_nbd.a 00:03:54.216 CC lib/ftl/ftl_layout.o 00:03:54.216 SO libspdk_lvol.so.10.0 00:03:54.216 SO libspdk_nbd.so.7.0 00:03:54.216 SYMLINK libspdk_lvol.so 00:03:54.216 SYMLINK libspdk_nbd.so 00:03:54.216 CC lib/nvmf/transport.o 00:03:54.216 CC lib/scsi/scsi.o 00:03:54.216 CC lib/ublk/ublk_rpc.o 00:03:54.216 CC lib/scsi/scsi_bdev.o 00:03:54.216 CC lib/nvmf/tcp.o 00:03:54.483 CC lib/scsi/scsi_pr.o 00:03:54.483 LIB libspdk_ublk.a 00:03:54.483 SO 
libspdk_ublk.so.3.0 00:03:54.483 CC lib/ftl/ftl_debug.o 00:03:54.483 SYMLINK libspdk_ublk.so 00:03:54.483 CC lib/ftl/ftl_io.o 00:03:54.749 CC lib/ftl/ftl_sb.o 00:03:54.749 CC lib/ftl/ftl_l2p.o 00:03:54.749 CC lib/scsi/scsi_rpc.o 00:03:54.749 CC lib/scsi/task.o 00:03:55.016 CC lib/nvmf/stubs.o 00:03:55.016 CC lib/nvmf/mdns_server.o 00:03:55.016 CC lib/ftl/ftl_l2p_flat.o 00:03:55.016 CC lib/ftl/ftl_nv_cache.o 00:03:55.016 CC lib/nvmf/rdma.o 00:03:55.016 CC lib/nvmf/auth.o 00:03:55.016 LIB libspdk_scsi.a 00:03:55.016 CC lib/ftl/ftl_band.o 00:03:55.016 SO libspdk_scsi.so.9.0 00:03:55.016 CC lib/ftl/ftl_band_ops.o 00:03:55.274 SYMLINK libspdk_scsi.so 00:03:55.274 CC lib/ftl/ftl_writer.o 00:03:55.274 CC lib/ftl/ftl_rq.o 00:03:55.533 CC lib/ftl/ftl_reloc.o 00:03:55.533 CC lib/ftl/ftl_l2p_cache.o 00:03:55.533 CC lib/iscsi/conn.o 00:03:55.533 CC lib/ftl/ftl_p2l.o 00:03:55.533 CC lib/ftl/ftl_p2l_log.o 00:03:55.533 CC lib/vhost/vhost.o 00:03:55.792 CC lib/ftl/mngt/ftl_mngt.o 00:03:55.792 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:55.792 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:55.792 CC lib/vhost/vhost_rpc.o 00:03:56.050 CC lib/vhost/vhost_scsi.o 00:03:56.050 CC lib/vhost/vhost_blk.o 00:03:56.050 CC lib/iscsi/init_grp.o 00:03:56.050 CC lib/iscsi/iscsi.o 00:03:56.050 CC lib/iscsi/param.o 00:03:56.050 CC lib/iscsi/portal_grp.o 00:03:56.050 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:56.308 CC lib/vhost/rte_vhost_user.o 00:03:56.308 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:56.308 CC lib/iscsi/tgt_node.o 00:03:56.308 CC lib/iscsi/iscsi_subsystem.o 00:03:56.308 CC lib/iscsi/iscsi_rpc.o 00:03:56.567 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:56.567 CC lib/iscsi/task.o 00:03:56.826 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:56.826 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:56.826 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:56.826 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:56.826 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:56.826 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:56.826 CC lib/ftl/mngt/ftl_mngt_upgrade.o 
00:03:56.826 CC lib/ftl/utils/ftl_conf.o 00:03:56.826 CC lib/ftl/utils/ftl_md.o 00:03:57.085 CC lib/ftl/utils/ftl_mempool.o 00:03:57.085 CC lib/ftl/utils/ftl_bitmap.o 00:03:57.085 CC lib/ftl/utils/ftl_property.o 00:03:57.085 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:57.085 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:57.085 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:57.085 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:57.344 LIB libspdk_vhost.a 00:03:57.344 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:57.344 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:57.344 SO libspdk_vhost.so.8.0 00:03:57.344 LIB libspdk_nvmf.a 00:03:57.344 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:57.344 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:57.344 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:57.344 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:57.344 SYMLINK libspdk_vhost.so 00:03:57.344 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:57.344 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:57.344 SO libspdk_nvmf.so.20.0 00:03:57.603 CC lib/ftl/base/ftl_base_dev.o 00:03:57.603 CC lib/ftl/base/ftl_base_bdev.o 00:03:57.603 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:57.603 LIB libspdk_iscsi.a 00:03:57.603 CC lib/ftl/ftl_trace.o 00:03:57.603 SO libspdk_iscsi.so.8.0 00:03:57.603 SYMLINK libspdk_nvmf.so 00:03:57.862 SYMLINK libspdk_iscsi.so 00:03:57.862 LIB libspdk_ftl.a 00:03:58.122 SO libspdk_ftl.so.9.0 00:03:58.381 SYMLINK libspdk_ftl.so 00:03:58.639 CC module/env_dpdk/env_dpdk_rpc.o 00:03:58.639 CC module/keyring/linux/keyring.o 00:03:58.639 CC module/sock/posix/posix.o 00:03:58.639 CC module/accel/dsa/accel_dsa.o 00:03:58.639 CC module/blob/bdev/blob_bdev.o 00:03:58.639 CC module/fsdev/aio/fsdev_aio.o 00:03:58.639 CC module/accel/error/accel_error.o 00:03:58.639 CC module/keyring/file/keyring.o 00:03:58.639 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:58.639 CC module/accel/ioat/accel_ioat.o 00:03:58.640 LIB libspdk_env_dpdk_rpc.a 00:03:58.898 SO libspdk_env_dpdk_rpc.so.6.0 00:03:58.898 SYMLINK 
libspdk_env_dpdk_rpc.so 00:03:58.898 CC module/accel/ioat/accel_ioat_rpc.o 00:03:58.898 CC module/keyring/linux/keyring_rpc.o 00:03:58.898 CC module/keyring/file/keyring_rpc.o 00:03:58.898 CC module/accel/error/accel_error_rpc.o 00:03:58.898 LIB libspdk_scheduler_dynamic.a 00:03:58.898 LIB libspdk_accel_ioat.a 00:03:58.898 SO libspdk_scheduler_dynamic.so.4.0 00:03:58.898 LIB libspdk_keyring_linux.a 00:03:58.898 LIB libspdk_blob_bdev.a 00:03:58.898 SO libspdk_keyring_linux.so.1.0 00:03:58.898 LIB libspdk_keyring_file.a 00:03:58.898 SO libspdk_accel_ioat.so.6.0 00:03:58.898 CC module/accel/dsa/accel_dsa_rpc.o 00:03:58.898 LIB libspdk_accel_error.a 00:03:58.898 SO libspdk_blob_bdev.so.11.0 00:03:58.898 SO libspdk_keyring_file.so.2.0 00:03:59.157 SYMLINK libspdk_scheduler_dynamic.so 00:03:59.157 SO libspdk_accel_error.so.2.0 00:03:59.157 SYMLINK libspdk_accel_ioat.so 00:03:59.157 SYMLINK libspdk_keyring_linux.so 00:03:59.157 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:59.157 SYMLINK libspdk_blob_bdev.so 00:03:59.157 SYMLINK libspdk_keyring_file.so 00:03:59.157 CC module/fsdev/aio/linux_aio_mgr.o 00:03:59.157 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:59.157 SYMLINK libspdk_accel_error.so 00:03:59.157 LIB libspdk_accel_dsa.a 00:03:59.157 SO libspdk_accel_dsa.so.5.0 00:03:59.157 CC module/scheduler/gscheduler/gscheduler.o 00:03:59.157 CC module/accel/iaa/accel_iaa.o 00:03:59.157 CC module/accel/iaa/accel_iaa_rpc.o 00:03:59.157 SYMLINK libspdk_accel_dsa.so 00:03:59.157 LIB libspdk_scheduler_dpdk_governor.a 00:03:59.157 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:59.416 CC module/bdev/delay/vbdev_delay.o 00:03:59.416 CC module/blobfs/bdev/blobfs_bdev.o 00:03:59.416 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:59.416 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:59.416 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:59.416 LIB libspdk_scheduler_gscheduler.a 00:03:59.416 SO libspdk_scheduler_gscheduler.so.4.0 00:03:59.416 CC module/bdev/error/vbdev_error.o 
00:03:59.416 LIB libspdk_accel_iaa.a 00:03:59.416 CC module/bdev/gpt/gpt.o 00:03:59.416 SO libspdk_accel_iaa.so.3.0 00:03:59.416 SYMLINK libspdk_scheduler_gscheduler.so 00:03:59.416 CC module/bdev/error/vbdev_error_rpc.o 00:03:59.416 LIB libspdk_fsdev_aio.a 00:03:59.416 LIB libspdk_sock_posix.a 00:03:59.416 SYMLINK libspdk_accel_iaa.so 00:03:59.416 CC module/bdev/gpt/vbdev_gpt.o 00:03:59.416 LIB libspdk_blobfs_bdev.a 00:03:59.416 SO libspdk_fsdev_aio.so.1.0 00:03:59.416 SO libspdk_sock_posix.so.6.0 00:03:59.416 SO libspdk_blobfs_bdev.so.6.0 00:03:59.675 SYMLINK libspdk_fsdev_aio.so 00:03:59.675 SYMLINK libspdk_blobfs_bdev.so 00:03:59.675 SYMLINK libspdk_sock_posix.so 00:03:59.675 CC module/bdev/lvol/vbdev_lvol.o 00:03:59.675 CC module/bdev/malloc/bdev_malloc.o 00:03:59.675 LIB libspdk_bdev_error.a 00:03:59.675 LIB libspdk_bdev_delay.a 00:03:59.675 SO libspdk_bdev_error.so.6.0 00:03:59.675 SO libspdk_bdev_delay.so.6.0 00:03:59.675 CC module/bdev/null/bdev_null.o 00:03:59.675 CC module/bdev/nvme/bdev_nvme.o 00:03:59.675 CC module/bdev/raid/bdev_raid.o 00:03:59.675 CC module/bdev/passthru/vbdev_passthru.o 00:03:59.675 CC module/bdev/split/vbdev_split.o 00:03:59.675 LIB libspdk_bdev_gpt.a 00:03:59.675 SYMLINK libspdk_bdev_error.so 00:03:59.675 CC module/bdev/raid/bdev_raid_rpc.o 00:03:59.675 SYMLINK libspdk_bdev_delay.so 00:03:59.675 CC module/bdev/raid/bdev_raid_sb.o 00:03:59.675 SO libspdk_bdev_gpt.so.6.0 00:03:59.934 SYMLINK libspdk_bdev_gpt.so 00:03:59.934 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:59.934 CC module/bdev/nvme/nvme_rpc.o 00:03:59.934 CC module/bdev/split/vbdev_split_rpc.o 00:03:59.934 CC module/bdev/null/bdev_null_rpc.o 00:03:59.934 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:59.934 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:59.934 CC module/bdev/raid/raid0.o 00:04:00.193 LIB libspdk_bdev_split.a 00:04:00.193 LIB libspdk_bdev_null.a 00:04:00.193 SO libspdk_bdev_split.so.6.0 00:04:00.193 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:00.193 SO 
libspdk_bdev_null.so.6.0 00:04:00.193 LIB libspdk_bdev_passthru.a 00:04:00.193 CC module/bdev/nvme/bdev_mdns_client.o 00:04:00.193 LIB libspdk_bdev_malloc.a 00:04:00.193 SO libspdk_bdev_passthru.so.6.0 00:04:00.193 SYMLINK libspdk_bdev_split.so 00:04:00.193 SYMLINK libspdk_bdev_null.so 00:04:00.193 SO libspdk_bdev_malloc.so.6.0 00:04:00.193 SYMLINK libspdk_bdev_passthru.so 00:04:00.193 SYMLINK libspdk_bdev_malloc.so 00:04:00.450 CC module/bdev/nvme/vbdev_opal.o 00:04:00.450 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:00.450 CC module/bdev/aio/bdev_aio.o 00:04:00.450 CC module/bdev/ftl/bdev_ftl.o 00:04:00.450 CC module/bdev/iscsi/bdev_iscsi.o 00:04:00.450 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:00.450 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:00.450 LIB libspdk_bdev_lvol.a 00:04:00.450 SO libspdk_bdev_lvol.so.6.0 00:04:00.708 SYMLINK libspdk_bdev_lvol.so 00:04:00.709 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:00.709 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:00.709 CC module/bdev/raid/raid1.o 00:04:00.709 CC module/bdev/raid/concat.o 00:04:00.709 CC module/bdev/aio/bdev_aio_rpc.o 00:04:00.709 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:00.709 LIB libspdk_bdev_iscsi.a 00:04:00.709 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:00.709 LIB libspdk_bdev_ftl.a 00:04:00.709 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:00.709 SO libspdk_bdev_iscsi.so.6.0 00:04:00.709 SO libspdk_bdev_ftl.so.6.0 00:04:00.709 LIB libspdk_bdev_aio.a 00:04:00.968 LIB libspdk_bdev_zone_block.a 00:04:00.968 SYMLINK libspdk_bdev_iscsi.so 00:04:00.968 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:00.968 SO libspdk_bdev_zone_block.so.6.0 00:04:00.968 SO libspdk_bdev_aio.so.6.0 00:04:00.968 SYMLINK libspdk_bdev_ftl.so 00:04:00.968 CC module/bdev/raid/raid5f.o 00:04:00.968 SYMLINK libspdk_bdev_zone_block.so 00:04:00.968 SYMLINK libspdk_bdev_aio.so 00:04:00.968 LIB libspdk_bdev_virtio.a 00:04:01.226 SO libspdk_bdev_virtio.so.6.0 00:04:01.226 SYMLINK libspdk_bdev_virtio.so 
00:04:01.485 LIB libspdk_bdev_raid.a 00:04:01.485 SO libspdk_bdev_raid.so.6.0 00:04:01.485 SYMLINK libspdk_bdev_raid.so 00:04:02.424 LIB libspdk_bdev_nvme.a 00:04:02.424 SO libspdk_bdev_nvme.so.7.1 00:04:02.424 SYMLINK libspdk_bdev_nvme.so 00:04:02.993 CC module/event/subsystems/keyring/keyring.o 00:04:02.993 CC module/event/subsystems/iobuf/iobuf.o 00:04:02.993 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:02.993 CC module/event/subsystems/scheduler/scheduler.o 00:04:02.993 CC module/event/subsystems/sock/sock.o 00:04:02.993 CC module/event/subsystems/vmd/vmd.o 00:04:02.993 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:02.993 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:02.993 CC module/event/subsystems/fsdev/fsdev.o 00:04:03.253 LIB libspdk_event_sock.a 00:04:03.253 LIB libspdk_event_vhost_blk.a 00:04:03.253 LIB libspdk_event_scheduler.a 00:04:03.253 LIB libspdk_event_keyring.a 00:04:03.253 LIB libspdk_event_vmd.a 00:04:03.253 LIB libspdk_event_iobuf.a 00:04:03.253 LIB libspdk_event_fsdev.a 00:04:03.253 SO libspdk_event_sock.so.5.0 00:04:03.253 SO libspdk_event_vhost_blk.so.3.0 00:04:03.253 SO libspdk_event_keyring.so.1.0 00:04:03.253 SO libspdk_event_scheduler.so.4.0 00:04:03.253 SO libspdk_event_fsdev.so.1.0 00:04:03.253 SO libspdk_event_vmd.so.6.0 00:04:03.253 SO libspdk_event_iobuf.so.3.0 00:04:03.253 SYMLINK libspdk_event_sock.so 00:04:03.253 SYMLINK libspdk_event_scheduler.so 00:04:03.253 SYMLINK libspdk_event_keyring.so 00:04:03.253 SYMLINK libspdk_event_vhost_blk.so 00:04:03.253 SYMLINK libspdk_event_fsdev.so 00:04:03.253 SYMLINK libspdk_event_vmd.so 00:04:03.253 SYMLINK libspdk_event_iobuf.so 00:04:03.512 CC module/event/subsystems/accel/accel.o 00:04:03.771 LIB libspdk_event_accel.a 00:04:03.771 SO libspdk_event_accel.so.6.0 00:04:04.031 SYMLINK libspdk_event_accel.so 00:04:04.290 CC module/event/subsystems/bdev/bdev.o 00:04:04.550 LIB libspdk_event_bdev.a 00:04:04.550 SO libspdk_event_bdev.so.6.0 00:04:04.550 SYMLINK 
libspdk_event_bdev.so 00:04:04.810 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:04.810 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:04.810 CC module/event/subsystems/nbd/nbd.o 00:04:04.810 CC module/event/subsystems/ublk/ublk.o 00:04:04.810 CC module/event/subsystems/scsi/scsi.o 00:04:05.069 LIB libspdk_event_nbd.a 00:04:05.069 LIB libspdk_event_ublk.a 00:04:05.069 SO libspdk_event_nbd.so.6.0 00:04:05.069 LIB libspdk_event_scsi.a 00:04:05.069 SO libspdk_event_scsi.so.6.0 00:04:05.069 SO libspdk_event_ublk.so.3.0 00:04:05.069 SYMLINK libspdk_event_nbd.so 00:04:05.069 SYMLINK libspdk_event_ublk.so 00:04:05.069 SYMLINK libspdk_event_scsi.so 00:04:05.069 LIB libspdk_event_nvmf.a 00:04:05.329 SO libspdk_event_nvmf.so.6.0 00:04:05.329 SYMLINK libspdk_event_nvmf.so 00:04:05.591 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:05.591 CC module/event/subsystems/iscsi/iscsi.o 00:04:05.591 LIB libspdk_event_vhost_scsi.a 00:04:05.591 LIB libspdk_event_iscsi.a 00:04:05.853 SO libspdk_event_vhost_scsi.so.3.0 00:04:05.853 SO libspdk_event_iscsi.so.6.0 00:04:05.853 SYMLINK libspdk_event_vhost_scsi.so 00:04:05.853 SYMLINK libspdk_event_iscsi.so 00:04:06.114 SO libspdk.so.6.0 00:04:06.115 SYMLINK libspdk.so 00:04:06.395 CC test/rpc_client/rpc_client_test.o 00:04:06.395 CXX app/trace/trace.o 00:04:06.395 TEST_HEADER include/spdk/accel.h 00:04:06.395 TEST_HEADER include/spdk/accel_module.h 00:04:06.395 TEST_HEADER include/spdk/assert.h 00:04:06.395 TEST_HEADER include/spdk/barrier.h 00:04:06.395 TEST_HEADER include/spdk/base64.h 00:04:06.395 TEST_HEADER include/spdk/bdev.h 00:04:06.395 TEST_HEADER include/spdk/bdev_module.h 00:04:06.395 TEST_HEADER include/spdk/bdev_zone.h 00:04:06.395 TEST_HEADER include/spdk/bit_array.h 00:04:06.395 TEST_HEADER include/spdk/bit_pool.h 00:04:06.395 TEST_HEADER include/spdk/blob_bdev.h 00:04:06.395 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:06.395 TEST_HEADER include/spdk/blobfs.h 00:04:06.395 TEST_HEADER include/spdk/blob.h 
00:04:06.395 TEST_HEADER include/spdk/conf.h 00:04:06.395 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:06.395 TEST_HEADER include/spdk/config.h 00:04:06.395 TEST_HEADER include/spdk/cpuset.h 00:04:06.395 TEST_HEADER include/spdk/crc16.h 00:04:06.395 TEST_HEADER include/spdk/crc32.h 00:04:06.395 TEST_HEADER include/spdk/crc64.h 00:04:06.395 TEST_HEADER include/spdk/dif.h 00:04:06.395 TEST_HEADER include/spdk/dma.h 00:04:06.395 TEST_HEADER include/spdk/endian.h 00:04:06.395 TEST_HEADER include/spdk/env_dpdk.h 00:04:06.395 TEST_HEADER include/spdk/env.h 00:04:06.395 TEST_HEADER include/spdk/event.h 00:04:06.395 TEST_HEADER include/spdk/fd_group.h 00:04:06.395 TEST_HEADER include/spdk/fd.h 00:04:06.395 TEST_HEADER include/spdk/file.h 00:04:06.395 TEST_HEADER include/spdk/fsdev.h 00:04:06.395 TEST_HEADER include/spdk/fsdev_module.h 00:04:06.395 CC test/thread/poller_perf/poller_perf.o 00:04:06.395 TEST_HEADER include/spdk/ftl.h 00:04:06.395 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:06.395 TEST_HEADER include/spdk/gpt_spec.h 00:04:06.395 TEST_HEADER include/spdk/hexlify.h 00:04:06.395 TEST_HEADER include/spdk/histogram_data.h 00:04:06.395 TEST_HEADER include/spdk/idxd.h 00:04:06.395 TEST_HEADER include/spdk/idxd_spec.h 00:04:06.395 CC examples/util/zipf/zipf.o 00:04:06.395 TEST_HEADER include/spdk/init.h 00:04:06.395 TEST_HEADER include/spdk/ioat.h 00:04:06.395 TEST_HEADER include/spdk/ioat_spec.h 00:04:06.395 TEST_HEADER include/spdk/iscsi_spec.h 00:04:06.395 TEST_HEADER include/spdk/json.h 00:04:06.395 TEST_HEADER include/spdk/jsonrpc.h 00:04:06.395 TEST_HEADER include/spdk/keyring.h 00:04:06.395 TEST_HEADER include/spdk/keyring_module.h 00:04:06.395 CC examples/ioat/perf/perf.o 00:04:06.395 TEST_HEADER include/spdk/likely.h 00:04:06.395 TEST_HEADER include/spdk/log.h 00:04:06.395 TEST_HEADER include/spdk/lvol.h 00:04:06.395 TEST_HEADER include/spdk/md5.h 00:04:06.395 CC test/dma/test_dma/test_dma.o 00:04:06.395 TEST_HEADER include/spdk/memory.h 00:04:06.395 
TEST_HEADER include/spdk/mmio.h 00:04:06.395 TEST_HEADER include/spdk/nbd.h 00:04:06.395 TEST_HEADER include/spdk/net.h 00:04:06.395 CC test/app/bdev_svc/bdev_svc.o 00:04:06.395 TEST_HEADER include/spdk/notify.h 00:04:06.395 TEST_HEADER include/spdk/nvme.h 00:04:06.395 TEST_HEADER include/spdk/nvme_intel.h 00:04:06.395 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:06.395 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:06.395 TEST_HEADER include/spdk/nvme_spec.h 00:04:06.395 TEST_HEADER include/spdk/nvme_zns.h 00:04:06.395 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:06.395 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:06.395 TEST_HEADER include/spdk/nvmf.h 00:04:06.395 TEST_HEADER include/spdk/nvmf_spec.h 00:04:06.395 TEST_HEADER include/spdk/nvmf_transport.h 00:04:06.395 TEST_HEADER include/spdk/opal.h 00:04:06.395 TEST_HEADER include/spdk/opal_spec.h 00:04:06.395 TEST_HEADER include/spdk/pci_ids.h 00:04:06.395 TEST_HEADER include/spdk/pipe.h 00:04:06.395 TEST_HEADER include/spdk/queue.h 00:04:06.395 TEST_HEADER include/spdk/reduce.h 00:04:06.395 TEST_HEADER include/spdk/rpc.h 00:04:06.395 TEST_HEADER include/spdk/scheduler.h 00:04:06.395 TEST_HEADER include/spdk/scsi.h 00:04:06.395 TEST_HEADER include/spdk/scsi_spec.h 00:04:06.665 TEST_HEADER include/spdk/sock.h 00:04:06.665 TEST_HEADER include/spdk/stdinc.h 00:04:06.665 CC test/env/mem_callbacks/mem_callbacks.o 00:04:06.665 TEST_HEADER include/spdk/string.h 00:04:06.665 LINK rpc_client_test 00:04:06.665 TEST_HEADER include/spdk/thread.h 00:04:06.665 TEST_HEADER include/spdk/trace.h 00:04:06.665 TEST_HEADER include/spdk/trace_parser.h 00:04:06.665 TEST_HEADER include/spdk/tree.h 00:04:06.665 TEST_HEADER include/spdk/ublk.h 00:04:06.665 TEST_HEADER include/spdk/util.h 00:04:06.665 TEST_HEADER include/spdk/uuid.h 00:04:06.665 TEST_HEADER include/spdk/version.h 00:04:06.665 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:06.665 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:06.665 TEST_HEADER include/spdk/vhost.h 
00:04:06.665 TEST_HEADER include/spdk/vmd.h 00:04:06.665 TEST_HEADER include/spdk/xor.h 00:04:06.665 TEST_HEADER include/spdk/zipf.h 00:04:06.665 CXX test/cpp_headers/accel.o 00:04:06.665 LINK poller_perf 00:04:06.665 LINK interrupt_tgt 00:04:06.665 LINK zipf 00:04:06.665 LINK bdev_svc 00:04:06.665 LINK ioat_perf 00:04:06.665 CXX test/cpp_headers/accel_module.o 00:04:06.665 LINK spdk_trace 00:04:06.665 LINK mem_callbacks 00:04:06.923 CC app/trace_record/trace_record.o 00:04:06.923 CC app/nvmf_tgt/nvmf_main.o 00:04:06.923 CXX test/cpp_headers/assert.o 00:04:06.923 CC app/iscsi_tgt/iscsi_tgt.o 00:04:06.923 CC examples/ioat/verify/verify.o 00:04:06.923 CC app/spdk_tgt/spdk_tgt.o 00:04:06.923 CC test/env/vtophys/vtophys.o 00:04:06.923 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:06.923 LINK test_dma 00:04:07.183 CXX test/cpp_headers/barrier.o 00:04:07.183 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:07.183 LINK spdk_trace_record 00:04:07.183 LINK nvmf_tgt 00:04:07.183 LINK iscsi_tgt 00:04:07.183 LINK vtophys 00:04:07.183 LINK spdk_tgt 00:04:07.183 LINK verify 00:04:07.183 LINK env_dpdk_post_init 00:04:07.183 CXX test/cpp_headers/base64.o 00:04:07.183 CXX test/cpp_headers/bdev.o 00:04:07.183 CXX test/cpp_headers/bdev_module.o 00:04:07.183 CXX test/cpp_headers/bdev_zone.o 00:04:07.442 CXX test/cpp_headers/bit_array.o 00:04:07.442 CXX test/cpp_headers/bit_pool.o 00:04:07.442 CC test/env/memory/memory_ut.o 00:04:07.442 CC app/spdk_lspci/spdk_lspci.o 00:04:07.442 CC app/spdk_nvme_perf/perf.o 00:04:07.442 CXX test/cpp_headers/blob_bdev.o 00:04:07.442 CXX test/cpp_headers/blobfs_bdev.o 00:04:07.442 CC app/spdk_nvme_identify/identify.o 00:04:07.442 CC examples/thread/thread/thread_ex.o 00:04:07.442 LINK nvme_fuzz 00:04:07.442 CC test/app/histogram_perf/histogram_perf.o 00:04:07.442 CC test/env/pci/pci_ut.o 00:04:07.701 LINK spdk_lspci 00:04:07.701 CXX test/cpp_headers/blobfs.o 00:04:07.701 LINK histogram_perf 00:04:07.701 LINK thread 00:04:07.701 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:07.960 CC test/event/event_perf/event_perf.o 00:04:07.960 CC test/event/reactor/reactor.o 00:04:07.960 CXX test/cpp_headers/blob.o 00:04:07.960 LINK reactor 00:04:07.960 LINK event_perf 00:04:07.960 CC test/app/jsoncat/jsoncat.o 00:04:07.960 CXX test/cpp_headers/conf.o 00:04:07.960 LINK pci_ut 00:04:08.219 CXX test/cpp_headers/config.o 00:04:08.219 LINK jsoncat 00:04:08.219 CXX test/cpp_headers/cpuset.o 00:04:08.219 CC examples/sock/hello_world/hello_sock.o 00:04:08.219 CC test/event/reactor_perf/reactor_perf.o 00:04:08.219 CC test/event/app_repeat/app_repeat.o 00:04:08.486 LINK memory_ut 00:04:08.486 CXX test/cpp_headers/crc16.o 00:04:08.486 LINK reactor_perf 00:04:08.486 CC test/app/stub/stub.o 00:04:08.486 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:08.486 LINK spdk_nvme_perf 00:04:08.486 LINK app_repeat 00:04:08.486 LINK spdk_nvme_identify 00:04:08.486 LINK hello_sock 00:04:08.486 CXX test/cpp_headers/crc32.o 00:04:08.486 CXX test/cpp_headers/crc64.o 00:04:08.486 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:08.747 LINK stub 00:04:08.747 CXX test/cpp_headers/dif.o 00:04:08.747 CXX test/cpp_headers/dma.o 00:04:08.747 CC test/event/scheduler/scheduler.o 00:04:08.747 CC app/spdk_nvme_discover/discovery_aer.o 00:04:08.747 CC app/spdk_top/spdk_top.o 00:04:08.747 CC examples/vmd/lsvmd/lsvmd.o 00:04:08.747 CC test/nvme/aer/aer.o 00:04:09.006 CC test/nvme/reset/reset.o 00:04:09.006 CXX test/cpp_headers/endian.o 00:04:09.006 CC test/nvme/sgl/sgl.o 00:04:09.006 LINK lsvmd 00:04:09.006 LINK spdk_nvme_discover 00:04:09.006 LINK scheduler 00:04:09.006 LINK vhost_fuzz 00:04:09.006 CXX test/cpp_headers/env_dpdk.o 00:04:09.280 CXX test/cpp_headers/env.o 00:04:09.280 LINK aer 00:04:09.280 LINK reset 00:04:09.280 CXX test/cpp_headers/event.o 00:04:09.280 LINK sgl 00:04:09.281 CC examples/vmd/led/led.o 00:04:09.281 CC test/nvme/overhead/overhead.o 00:04:09.281 CC test/nvme/e2edp/nvme_dp.o 00:04:09.281 CXX 
test/cpp_headers/fd_group.o 00:04:09.281 LINK led 00:04:09.548 CC test/nvme/err_injection/err_injection.o 00:04:09.548 CC test/nvme/startup/startup.o 00:04:09.548 CXX test/cpp_headers/fd.o 00:04:09.548 CC test/blobfs/mkfs/mkfs.o 00:04:09.548 CC test/accel/dif/dif.o 00:04:09.548 LINK nvme_dp 00:04:09.548 LINK overhead 00:04:09.548 LINK startup 00:04:09.548 LINK err_injection 00:04:09.806 CXX test/cpp_headers/file.o 00:04:09.806 CC examples/idxd/perf/perf.o 00:04:09.806 LINK mkfs 00:04:09.806 CXX test/cpp_headers/fsdev.o 00:04:09.806 CXX test/cpp_headers/fsdev_module.o 00:04:09.806 CXX test/cpp_headers/ftl.o 00:04:09.806 LINK spdk_top 00:04:09.806 CC test/nvme/reserve/reserve.o 00:04:09.806 LINK iscsi_fuzz 00:04:10.065 CXX test/cpp_headers/fuse_dispatcher.o 00:04:10.065 CXX test/cpp_headers/gpt_spec.o 00:04:10.065 CC test/nvme/simple_copy/simple_copy.o 00:04:10.065 CC test/lvol/esnap/esnap.o 00:04:10.065 LINK reserve 00:04:10.065 LINK idxd_perf 00:04:10.065 CXX test/cpp_headers/hexlify.o 00:04:10.065 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:10.065 CC app/vhost/vhost.o 00:04:10.323 CC app/spdk_dd/spdk_dd.o 00:04:10.323 CC examples/accel/perf/accel_perf.o 00:04:10.323 CXX test/cpp_headers/histogram_data.o 00:04:10.323 LINK simple_copy 00:04:10.323 CC test/nvme/connect_stress/connect_stress.o 00:04:10.323 LINK vhost 00:04:10.323 CC test/nvme/boot_partition/boot_partition.o 00:04:10.323 LINK dif 00:04:10.582 CXX test/cpp_headers/idxd.o 00:04:10.582 LINK hello_fsdev 00:04:10.582 LINK boot_partition 00:04:10.582 LINK connect_stress 00:04:10.582 CXX test/cpp_headers/idxd_spec.o 00:04:10.582 CXX test/cpp_headers/init.o 00:04:10.582 CC test/nvme/compliance/nvme_compliance.o 00:04:10.582 LINK spdk_dd 00:04:10.840 CXX test/cpp_headers/ioat.o 00:04:10.840 CXX test/cpp_headers/ioat_spec.o 00:04:10.840 CXX test/cpp_headers/iscsi_spec.o 00:04:10.840 CXX test/cpp_headers/json.o 00:04:10.840 CC test/bdev/bdevio/bdevio.o 00:04:10.840 LINK accel_perf 00:04:10.840 CC 
examples/blob/hello_world/hello_blob.o 00:04:10.840 CXX test/cpp_headers/jsonrpc.o 00:04:11.099 LINK nvme_compliance 00:04:11.099 CC test/nvme/fused_ordering/fused_ordering.o 00:04:11.099 CC app/fio/nvme/fio_plugin.o 00:04:11.099 CC examples/blob/cli/blobcli.o 00:04:11.099 CXX test/cpp_headers/keyring.o 00:04:11.099 CC examples/nvme/hello_world/hello_world.o 00:04:11.099 LINK hello_blob 00:04:11.358 CXX test/cpp_headers/keyring_module.o 00:04:11.358 CC app/fio/bdev/fio_plugin.o 00:04:11.358 LINK fused_ordering 00:04:11.358 LINK bdevio 00:04:11.358 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:11.358 LINK hello_world 00:04:11.358 CXX test/cpp_headers/likely.o 00:04:11.358 CXX test/cpp_headers/log.o 00:04:11.358 CC test/nvme/fdp/fdp.o 00:04:11.358 LINK doorbell_aers 00:04:11.617 CXX test/cpp_headers/lvol.o 00:04:11.617 LINK blobcli 00:04:11.617 CC examples/nvme/reconnect/reconnect.o 00:04:11.617 LINK spdk_nvme 00:04:11.617 CXX test/cpp_headers/md5.o 00:04:11.617 CC test/nvme/cuse/cuse.o 00:04:11.617 CC examples/bdev/hello_world/hello_bdev.o 00:04:11.876 CC examples/bdev/bdevperf/bdevperf.o 00:04:11.876 LINK spdk_bdev 00:04:11.876 LINK fdp 00:04:11.877 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:11.877 CC examples/nvme/arbitration/arbitration.o 00:04:11.877 CXX test/cpp_headers/memory.o 00:04:11.877 CXX test/cpp_headers/mmio.o 00:04:11.877 LINK hello_bdev 00:04:12.136 LINK reconnect 00:04:12.136 CXX test/cpp_headers/nbd.o 00:04:12.136 CXX test/cpp_headers/net.o 00:04:12.136 CC examples/nvme/hotplug/hotplug.o 00:04:12.136 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:12.136 CXX test/cpp_headers/notify.o 00:04:12.136 CXX test/cpp_headers/nvme.o 00:04:12.136 LINK arbitration 00:04:12.406 CC examples/nvme/abort/abort.o 00:04:12.406 LINK cmb_copy 00:04:12.406 CXX test/cpp_headers/nvme_intel.o 00:04:12.406 LINK hotplug 00:04:12.406 CXX test/cpp_headers/nvme_ocssd.o 00:04:12.406 LINK nvme_manage 00:04:12.406 CC examples/nvme/pmr_persistence/pmr_persistence.o 
00:04:12.406 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:12.682 CXX test/cpp_headers/nvme_spec.o 00:04:12.682 CXX test/cpp_headers/nvme_zns.o 00:04:12.682 CXX test/cpp_headers/nvmf_cmd.o 00:04:12.682 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:12.682 LINK pmr_persistence 00:04:12.682 LINK abort 00:04:12.682 CXX test/cpp_headers/nvmf.o 00:04:12.682 LINK bdevperf 00:04:12.682 CXX test/cpp_headers/nvmf_spec.o 00:04:12.682 CXX test/cpp_headers/nvmf_transport.o 00:04:12.682 CXX test/cpp_headers/opal.o 00:04:12.682 CXX test/cpp_headers/opal_spec.o 00:04:12.682 CXX test/cpp_headers/pci_ids.o 00:04:12.941 CXX test/cpp_headers/pipe.o 00:04:12.941 CXX test/cpp_headers/queue.o 00:04:12.941 CXX test/cpp_headers/reduce.o 00:04:12.941 CXX test/cpp_headers/rpc.o 00:04:12.941 CXX test/cpp_headers/scheduler.o 00:04:12.941 CXX test/cpp_headers/scsi.o 00:04:12.941 CXX test/cpp_headers/scsi_spec.o 00:04:12.941 CXX test/cpp_headers/sock.o 00:04:12.941 CXX test/cpp_headers/stdinc.o 00:04:13.199 CXX test/cpp_headers/string.o 00:04:13.199 LINK cuse 00:04:13.199 CXX test/cpp_headers/thread.o 00:04:13.199 CXX test/cpp_headers/trace.o 00:04:13.199 CXX test/cpp_headers/trace_parser.o 00:04:13.199 CC examples/nvmf/nvmf/nvmf.o 00:04:13.199 CXX test/cpp_headers/tree.o 00:04:13.199 CXX test/cpp_headers/ublk.o 00:04:13.199 CXX test/cpp_headers/util.o 00:04:13.199 CXX test/cpp_headers/uuid.o 00:04:13.199 CXX test/cpp_headers/version.o 00:04:13.199 CXX test/cpp_headers/vfio_user_pci.o 00:04:13.199 CXX test/cpp_headers/vfio_user_spec.o 00:04:13.199 CXX test/cpp_headers/vhost.o 00:04:13.458 CXX test/cpp_headers/vmd.o 00:04:13.458 CXX test/cpp_headers/xor.o 00:04:13.458 CXX test/cpp_headers/zipf.o 00:04:13.458 LINK nvmf 00:04:15.989 LINK esnap 00:04:16.248 00:04:16.248 real 1m18.166s 00:04:16.248 user 6m2.847s 00:04:16.248 sys 1m7.378s 00:04:16.248 23:44:10 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:16.248 23:44:10 make -- common/autotest_common.sh@10 -- $ set +x 00:04:16.248 
************************************ 00:04:16.248 END TEST make 00:04:16.248 ************************************ 00:04:16.507 23:44:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:16.507 23:44:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:16.507 23:44:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:16.507 23:44:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.507 23:44:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:16.507 23:44:10 -- pm/common@44 -- $ pid=6218 00:04:16.507 23:44:10 -- pm/common@50 -- $ kill -TERM 6218 00:04:16.507 23:44:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.507 23:44:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:16.507 23:44:10 -- pm/common@44 -- $ pid=6220 00:04:16.507 23:44:10 -- pm/common@50 -- $ kill -TERM 6220 00:04:16.507 23:44:10 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:16.507 23:44:10 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:16.507 23:44:10 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:16.507 23:44:10 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:16.507 23:44:10 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:16.507 23:44:10 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:16.507 23:44:10 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.507 23:44:10 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.507 23:44:10 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.507 23:44:10 -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.507 23:44:10 -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.507 23:44:10 -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.507 23:44:10 -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.507 
23:44:10 -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.507 23:44:10 -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.507 23:44:10 -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.507 23:44:10 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.507 23:44:10 -- scripts/common.sh@344 -- # case "$op" in 00:04:16.507 23:44:10 -- scripts/common.sh@345 -- # : 1 00:04:16.507 23:44:10 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.507 23:44:10 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:16.507 23:44:10 -- scripts/common.sh@365 -- # decimal 1 00:04:16.767 23:44:10 -- scripts/common.sh@353 -- # local d=1 00:04:16.767 23:44:10 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.767 23:44:10 -- scripts/common.sh@355 -- # echo 1 00:04:16.767 23:44:10 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.767 23:44:10 -- scripts/common.sh@366 -- # decimal 2 00:04:16.767 23:44:10 -- scripts/common.sh@353 -- # local d=2 00:04:16.767 23:44:10 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.767 23:44:10 -- scripts/common.sh@355 -- # echo 2 00:04:16.767 23:44:10 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.767 23:44:10 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.767 23:44:10 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.767 23:44:10 -- scripts/common.sh@368 -- # return 0 00:04:16.767 23:44:10 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.767 23:44:10 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:16.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.767 --rc genhtml_branch_coverage=1 00:04:16.767 --rc genhtml_function_coverage=1 00:04:16.767 --rc genhtml_legend=1 00:04:16.767 --rc geninfo_all_blocks=1 00:04:16.767 --rc geninfo_unexecuted_blocks=1 00:04:16.767 00:04:16.767 ' 00:04:16.767 23:44:10 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:16.767 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.767 --rc genhtml_branch_coverage=1 00:04:16.767 --rc genhtml_function_coverage=1 00:04:16.767 --rc genhtml_legend=1 00:04:16.767 --rc geninfo_all_blocks=1 00:04:16.767 --rc geninfo_unexecuted_blocks=1 00:04:16.767 00:04:16.767 ' 00:04:16.767 23:44:10 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:16.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.767 --rc genhtml_branch_coverage=1 00:04:16.767 --rc genhtml_function_coverage=1 00:04:16.767 --rc genhtml_legend=1 00:04:16.767 --rc geninfo_all_blocks=1 00:04:16.767 --rc geninfo_unexecuted_blocks=1 00:04:16.767 00:04:16.767 ' 00:04:16.767 23:44:10 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:16.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.767 --rc genhtml_branch_coverage=1 00:04:16.767 --rc genhtml_function_coverage=1 00:04:16.767 --rc genhtml_legend=1 00:04:16.767 --rc geninfo_all_blocks=1 00:04:16.767 --rc geninfo_unexecuted_blocks=1 00:04:16.767 00:04:16.767 ' 00:04:16.767 23:44:10 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:16.767 23:44:10 -- nvmf/common.sh@7 -- # uname -s 00:04:16.767 23:44:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:16.767 23:44:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:16.767 23:44:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:16.767 23:44:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:16.767 23:44:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:16.767 23:44:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:16.767 23:44:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:16.767 23:44:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:16.767 23:44:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:16.767 23:44:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:16.767 23:44:10 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0135e85e-f94b-4193-aab3-cb65764a45eb 00:04:16.767 23:44:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=0135e85e-f94b-4193-aab3-cb65764a45eb 00:04:16.767 23:44:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:16.767 23:44:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:16.767 23:44:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:16.767 23:44:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:16.767 23:44:10 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:16.767 23:44:10 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:16.767 23:44:10 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:16.767 23:44:10 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:16.767 23:44:10 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:16.767 23:44:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.767 23:44:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.768 23:44:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.768 23:44:10 -- paths/export.sh@5 -- # export PATH 00:04:16.768 23:44:10 -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.768 23:44:10 -- nvmf/common.sh@51 -- # : 0 00:04:16.768 23:44:10 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:16.768 23:44:10 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:16.768 23:44:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:16.768 23:44:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:16.768 23:44:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:16.768 23:44:10 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:16.768 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:16.768 23:44:10 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:16.768 23:44:10 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:16.768 23:44:10 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:16.768 23:44:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:16.768 23:44:10 -- spdk/autotest.sh@32 -- # uname -s 00:04:16.768 23:44:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:16.768 23:44:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:16.768 23:44:10 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:16.768 23:44:10 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:16.768 23:44:10 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:16.768 23:44:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:16.768 23:44:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:16.768 23:44:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:16.768 23:44:10 -- spdk/autotest.sh@48 -- # udevadm_pid=66562 
00:04:16.768 23:44:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:16.768 23:44:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:16.768 23:44:10 -- pm/common@17 -- # local monitor 00:04:16.768 23:44:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.768 23:44:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.768 23:44:10 -- pm/common@25 -- # sleep 1 00:04:16.768 23:44:10 -- pm/common@21 -- # date +%s 00:04:16.768 23:44:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730591050 00:04:16.768 23:44:10 -- pm/common@21 -- # date +%s 00:04:16.768 23:44:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730591050 00:04:16.768 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730591050_collect-cpu-load.pm.log 00:04:16.768 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730591050_collect-vmstat.pm.log 00:04:17.705 23:44:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:17.705 23:44:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:17.705 23:44:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:17.705 23:44:11 -- common/autotest_common.sh@10 -- # set +x 00:04:17.705 23:44:11 -- spdk/autotest.sh@59 -- # create_test_list 00:04:17.705 23:44:11 -- common/autotest_common.sh@750 -- # xtrace_disable 00:04:17.705 23:44:11 -- common/autotest_common.sh@10 -- # set +x 00:04:17.964 23:44:11 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:17.964 23:44:11 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:17.964 23:44:11 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:17.964 23:44:11 -- 
spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:17.964 23:44:11 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:17.964 23:44:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:17.964 23:44:11 -- common/autotest_common.sh@1455 -- # uname 00:04:17.964 23:44:11 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:17.964 23:44:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:17.964 23:44:11 -- common/autotest_common.sh@1475 -- # uname 00:04:17.964 23:44:11 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:17.964 23:44:11 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:17.964 23:44:11 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:17.964 lcov: LCOV version 1.15 00:04:17.964 23:44:11 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:32.849 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:32.849 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:47.769 23:44:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:47.769 23:44:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:47.769 23:44:40 -- common/autotest_common.sh@10 -- # set +x 00:04:47.769 23:44:40 -- spdk/autotest.sh@78 -- # rm -f 00:04:47.769 23:44:40 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:47.769 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.769 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:47.769 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:47.769 23:44:41 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:47.769 23:44:41 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:47.769 23:44:41 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:47.769 23:44:41 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:47.769 23:44:41 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:47.769 23:44:41 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:47.769 23:44:41 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:47.769 23:44:41 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:47.769 23:44:41 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:47.769 23:44:41 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:47.769 23:44:41 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:47.769 23:44:41 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:47.769 23:44:41 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:47.769 23:44:41 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:47.769 23:44:41 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:47.769 23:44:41 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:47.769 23:44:41 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:47.769 23:44:41 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:47.769 23:44:41 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:47.769 23:44:41 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:47.769 23:44:41 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 
00:04:47.769 23:44:41 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:47.769 23:44:41 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:47.769 23:44:41 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:47.769 23:44:41 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:47.769 23:44:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:47.769 23:44:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:47.769 23:44:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:47.769 23:44:41 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:47.769 23:44:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:47.769 No valid GPT data, bailing 00:04:47.769 23:44:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:47.769 23:44:41 -- scripts/common.sh@394 -- # pt= 00:04:47.769 23:44:41 -- scripts/common.sh@395 -- # return 1 00:04:47.769 23:44:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:47.769 1+0 records in 00:04:47.769 1+0 records out 00:04:47.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00639203 s, 164 MB/s 00:04:47.769 23:44:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:47.769 23:44:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:47.769 23:44:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:47.769 23:44:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:47.769 23:44:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:47.769 No valid GPT data, bailing 00:04:47.769 23:44:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:47.769 23:44:41 -- scripts/common.sh@394 -- # pt= 00:04:47.769 23:44:41 -- scripts/common.sh@395 -- # return 1 00:04:47.769 23:44:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:47.769 1+0 records in 
00:04:47.769 1+0 records out 00:04:47.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00634512 s, 165 MB/s 00:04:47.769 23:44:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:47.769 23:44:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:47.769 23:44:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:47.769 23:44:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:47.769 23:44:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:47.769 No valid GPT data, bailing 00:04:47.769 23:44:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:47.769 23:44:41 -- scripts/common.sh@394 -- # pt= 00:04:47.769 23:44:41 -- scripts/common.sh@395 -- # return 1 00:04:47.769 23:44:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:47.769 1+0 records in 00:04:47.769 1+0 records out 00:04:47.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00664272 s, 158 MB/s 00:04:47.769 23:44:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:47.769 23:44:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:47.769 23:44:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:47.769 23:44:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:47.769 23:44:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:47.769 No valid GPT data, bailing 00:04:47.769 23:44:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:47.769 23:44:41 -- scripts/common.sh@394 -- # pt= 00:04:47.769 23:44:41 -- scripts/common.sh@395 -- # return 1 00:04:47.769 23:44:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:47.769 1+0 records in 00:04:47.769 1+0 records out 00:04:47.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00606482 s, 173 MB/s 00:04:47.769 23:44:41 -- spdk/autotest.sh@105 -- # sync 00:04:47.769 23:44:41 -- spdk/autotest.sh@107 -- # 
xtrace_disable_per_cmd reap_spdk_processes 00:04:47.769 23:44:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:47.769 23:44:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:51.072 23:44:44 -- spdk/autotest.sh@111 -- # uname -s 00:04:51.072 23:44:44 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:51.072 23:44:44 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:51.072 23:44:44 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:51.332 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:51.332 Hugepages 00:04:51.332 node hugesize free / total 00:04:51.332 node0 1048576kB 0 / 0 00:04:51.332 node0 2048kB 0 / 0 00:04:51.332 00:04:51.332 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:51.602 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:51.602 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:51.868 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:51.868 23:44:45 -- spdk/autotest.sh@117 -- # uname -s 00:04:51.868 23:44:45 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:51.868 23:44:45 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:51.868 23:44:45 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:52.441 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:52.699 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:52.699 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:52.699 23:44:46 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:54.085 23:44:47 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:54.085 23:44:47 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:54.085 23:44:47 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:54.085 23:44:47 -- common/autotest_common.sh@1518 -- # 
get_nvme_bdfs 00:04:54.085 23:44:47 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:54.085 23:44:47 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:54.085 23:44:47 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:54.085 23:44:47 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:54.085 23:44:47 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:54.085 23:44:47 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:54.085 23:44:47 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:54.085 23:44:47 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:54.348 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:54.348 Waiting for block devices as requested 00:04:54.349 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:54.608 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:54.608 23:44:48 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:54.608 23:44:48 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:54.608 23:44:48 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:54.608 23:44:48 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:54.608 23:44:48 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:54.608 23:44:48 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:54.608 23:44:48 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:54.608 23:44:48 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:54.608 23:44:48 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 
00:04:54.608 23:44:48 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:54.608 23:44:48 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:54.608 23:44:48 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:54.608 23:44:48 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:54.608 23:44:48 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:54.608 23:44:48 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:54.608 23:44:48 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:54.608 23:44:48 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:54.608 23:44:48 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:54.608 23:44:48 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:54.608 23:44:48 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:54.608 23:44:48 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:54.608 23:44:48 -- common/autotest_common.sh@1541 -- # continue 00:04:54.608 23:44:48 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:54.608 23:44:48 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:54.608 23:44:48 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:54.608 23:44:48 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:54.608 23:44:48 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:54.608 23:44:48 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:54.608 23:44:48 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:54.608 23:44:48 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:54.608 23:44:48 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:54.608 23:44:48 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:54.608 
23:44:48 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:54.608 23:44:48 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:54.608 23:44:48 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:54.608 23:44:48 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:54.608 23:44:48 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:54.608 23:44:48 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:54.608 23:44:48 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:54.608 23:44:48 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:54.608 23:44:48 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:54.608 23:44:48 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:54.608 23:44:48 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:54.608 23:44:48 -- common/autotest_common.sh@1541 -- # continue 00:04:54.608 23:44:48 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:54.608 23:44:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:54.608 23:44:48 -- common/autotest_common.sh@10 -- # set +x 00:04:54.868 23:44:48 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:54.868 23:44:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:54.868 23:44:48 -- common/autotest_common.sh@10 -- # set +x 00:04:54.868 23:44:48 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:55.437 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.839 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:55.839 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:55.839 23:44:49 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:55.839 23:44:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:55.839 23:44:49 -- common/autotest_common.sh@10 -- # set +x 00:04:55.839 23:44:49 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:55.839 23:44:49 -- 
common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:55.839 23:44:49 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:55.839 23:44:49 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:55.839 23:44:49 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:55.839 23:44:49 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:55.839 23:44:49 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:55.839 23:44:49 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:55.839 23:44:49 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:55.839 23:44:49 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:55.839 23:44:49 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:55.839 23:44:49 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:55.839 23:44:49 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:55.839 23:44:49 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:55.839 23:44:49 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:55.839 23:44:49 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:55.839 23:44:49 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:55.839 23:44:49 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:55.839 23:44:49 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:55.839 23:44:49 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:55.839 23:44:49 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:55.839 23:44:49 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:55.839 23:44:49 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:55.839 23:44:49 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:55.839 23:44:49 -- 
common/autotest_common.sh@1570 -- # return 0 00:04:55.839 23:44:49 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:55.839 23:44:49 -- common/autotest_common.sh@1578 -- # return 0 00:04:55.839 23:44:49 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:55.840 23:44:49 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:55.840 23:44:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:55.840 23:44:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:55.840 23:44:49 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:55.840 23:44:49 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:55.840 23:44:49 -- common/autotest_common.sh@10 -- # set +x 00:04:55.840 23:44:49 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:55.840 23:44:49 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:55.840 23:44:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:55.840 23:44:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:55.840 23:44:49 -- common/autotest_common.sh@10 -- # set +x 00:04:55.840 ************************************ 00:04:55.840 START TEST env 00:04:55.840 ************************************ 00:04:55.840 23:44:49 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:56.103 * Looking for test storage... 
00:04:56.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:04:56.103 23:44:50 env -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:56.103 23:44:50 env -- common/autotest_common.sh@1691 -- # lcov --version
00:04:56.103 23:44:50 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:56.103 23:44:50 env -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:56.103 23:44:50 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:56.103 23:44:50 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:56.103 23:44:50 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:56.103 23:44:50 env -- scripts/common.sh@336 -- # IFS=.-:
00:04:56.103 23:44:50 env -- scripts/common.sh@336 -- # read -ra ver1
00:04:56.103 23:44:50 env -- scripts/common.sh@337 -- # IFS=.-:
00:04:56.103 23:44:50 env -- scripts/common.sh@337 -- # read -ra ver2
00:04:56.103 23:44:50 env -- scripts/common.sh@338 -- # local 'op=<'
00:04:56.103 23:44:50 env -- scripts/common.sh@340 -- # ver1_l=2
00:04:56.103 23:44:50 env -- scripts/common.sh@341 -- # ver2_l=1
00:04:56.103 23:44:50 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:56.103 23:44:50 env -- scripts/common.sh@344 -- # case "$op" in
00:04:56.103 23:44:50 env -- scripts/common.sh@345 -- # : 1
00:04:56.103 23:44:50 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:56.103 23:44:50 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:56.103 23:44:50 env -- scripts/common.sh@365 -- # decimal 1
00:04:56.103 23:44:50 env -- scripts/common.sh@353 -- # local d=1
00:04:56.103 23:44:50 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:56.103 23:44:50 env -- scripts/common.sh@355 -- # echo 1
00:04:56.103 23:44:50 env -- scripts/common.sh@365 -- # ver1[v]=1
00:04:56.103 23:44:50 env -- scripts/common.sh@366 -- # decimal 2
00:04:56.103 23:44:50 env -- scripts/common.sh@353 -- # local d=2
00:04:56.103 23:44:50 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:56.103 23:44:50 env -- scripts/common.sh@355 -- # echo 2
00:04:56.103 23:44:50 env -- scripts/common.sh@366 -- # ver2[v]=2
00:04:56.103 23:44:50 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:56.103 23:44:50 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:56.103 23:44:50 env -- scripts/common.sh@368 -- # return 0
00:04:56.103 23:44:50 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:56.103 23:44:50 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:56.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:56.103 --rc genhtml_branch_coverage=1
00:04:56.103 --rc genhtml_function_coverage=1
00:04:56.103 --rc genhtml_legend=1
00:04:56.103 --rc geninfo_all_blocks=1
00:04:56.103 --rc geninfo_unexecuted_blocks=1
00:04:56.103
00:04:56.103 '
00:04:56.103 23:44:50 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:56.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:56.103 --rc genhtml_branch_coverage=1
00:04:56.103 --rc genhtml_function_coverage=1
00:04:56.103 --rc genhtml_legend=1
00:04:56.103 --rc geninfo_all_blocks=1
00:04:56.103 --rc geninfo_unexecuted_blocks=1
00:04:56.103
00:04:56.103 '
00:04:56.103 23:44:50 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:56.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:56.103 --rc genhtml_branch_coverage=1
00:04:56.103 --rc genhtml_function_coverage=1
00:04:56.103 --rc genhtml_legend=1
00:04:56.103 --rc geninfo_all_blocks=1
00:04:56.103 --rc geninfo_unexecuted_blocks=1
00:04:56.103
00:04:56.103 '
00:04:56.103 23:44:50 env -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:56.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:56.103 --rc genhtml_branch_coverage=1
00:04:56.103 --rc genhtml_function_coverage=1
00:04:56.103 --rc genhtml_legend=1
00:04:56.103 --rc geninfo_all_blocks=1
00:04:56.103 --rc geninfo_unexecuted_blocks=1
00:04:56.103
00:04:56.103 '
00:04:56.103 23:44:50 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:04:56.103 23:44:50 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:56.103 23:44:50 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:56.103 23:44:50 env -- common/autotest_common.sh@10 -- # set +x
00:04:56.103 ************************************
00:04:56.103 START TEST env_memory
00:04:56.103 ************************************
00:04:56.103 23:44:50 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:04:56.103
00:04:56.103
00:04:56.103 CUnit - A unit testing framework for C - Version 2.1-3
00:04:56.103 http://cunit.sourceforge.net/
00:04:56.103
00:04:56.103
00:04:56.103 Suite: memory
00:04:56.395 Test: alloc and free memory map ...[2024-11-02 23:44:50.213024] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:56.395 passed
00:04:56.395 Test: mem map translation ...[2024-11-02 23:44:50.255533] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:04:56.395 [2024-11-02 23:44:50.255593] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:04:56.395 [2024-11-02 23:44:50.255655] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:04:56.395 [2024-11-02 23:44:50.255678] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:04:56.395 passed
00:04:56.395 Test: mem map registration ...[2024-11-02 23:44:50.320882] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:04:56.395 [2024-11-02 23:44:50.320951] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:04:56.395 passed
00:04:56.395 Test: mem map adjacent registrations ...passed
00:04:56.395
00:04:56.395 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:56.395               suites      1      1    n/a      0        0
00:04:56.395                tests      4      4      4      0        0
00:04:56.395              asserts    152    152    152      0      n/a
00:04:56.395
00:04:56.395 Elapsed time =    0.238 seconds
00:04:56.395
00:04:56.395 real	0m0.291s
00:04:56.395 user	0m0.252s
00:04:56.395 sys	0m0.028s
00:04:56.395 23:44:50 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:56.395 23:44:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:04:56.395 ************************************
00:04:56.395 END TEST env_memory
00:04:56.395 ************************************
00:04:56.395 23:44:50 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:04:56.395 23:44:50 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:56.395 23:44:50 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:56.395 23:44:50 env -- common/autotest_common.sh@10 -- # set +x
00:04:56.656 ************************************
00:04:56.656 START TEST env_vtophys
00:04:56.656 ************************************
00:04:56.656 23:44:50 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:04:56.656 EAL: lib.eal log level changed from notice to debug
00:04:56.656 EAL: Detected lcore 0 as core 0 on socket 0
00:04:56.656 EAL: Detected lcore 1 as core 0 on socket 0
00:04:56.656 EAL: Detected lcore 2 as core 0 on socket 0
00:04:56.656 EAL: Detected lcore 3 as core 0 on socket 0
00:04:56.656 EAL: Detected lcore 4 as core 0 on socket 0
00:04:56.656 EAL: Detected lcore 5 as core 0 on socket 0
00:04:56.656 EAL: Detected lcore 6 as core 0 on socket 0
00:04:56.656 EAL: Detected lcore 7 as core 0 on socket 0
00:04:56.656 EAL: Detected lcore 8 as core 0 on socket 0
00:04:56.656 EAL: Detected lcore 9 as core 0 on socket 0
00:04:56.656 EAL: Maximum logical cores by configuration: 128
00:04:56.656 EAL: Detected CPU lcores: 10
00:04:56.656 EAL: Detected NUMA nodes: 1
00:04:56.656 EAL: Checking presence of .so 'librte_eal.so.23.0'
00:04:56.656 EAL: Detected shared linkage of DPDK
00:04:56.656 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0
00:04:56.656 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0
00:04:56.656 EAL: Registered [vdev] bus.
00:04:56.656 EAL: bus.vdev log level changed from disabled to notice
00:04:56.657 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0
00:04:56.657 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0
00:04:56.657 EAL: pmd.net.i40e.init log level changed from disabled to notice
00:04:56.657 EAL: pmd.net.i40e.driver log level changed from disabled to notice
00:04:56.657 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so
00:04:56.657 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so
00:04:56.657 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so
00:04:56.657 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so
00:04:56.657 EAL: No shared files mode enabled, IPC will be disabled
00:04:56.657 EAL: No shared files mode enabled, IPC is disabled
00:04:56.657 EAL: Selected IOVA mode 'PA'
00:04:56.657 EAL: Probing VFIO support...
00:04:56.657 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:04:56.657 EAL: VFIO modules not loaded, skipping VFIO support...
00:04:56.657 EAL: Ask a virtual area of 0x2e000 bytes
00:04:56.657 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:56.657 EAL: Setting up physically contiguous memory...
00:04:56.657 EAL: Setting maximum number of open files to 524288
00:04:56.657 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:56.657 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:56.657 EAL: Ask a virtual area of 0x61000 bytes
00:04:56.657 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:56.657 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:56.657 EAL: Ask a virtual area of 0x400000000 bytes
00:04:56.657 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:56.657 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:56.657 EAL: Ask a virtual area of 0x61000 bytes
00:04:56.657 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:56.657 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:56.657 EAL: Ask a virtual area of 0x400000000 bytes
00:04:56.657 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:56.657 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:56.657 EAL: Ask a virtual area of 0x61000 bytes
00:04:56.657 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:56.657 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:56.657 EAL: Ask a virtual area of 0x400000000 bytes
00:04:56.657 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:56.657 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:56.657 EAL: Ask a virtual area of 0x61000 bytes
00:04:56.657 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:56.658 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:56.658 EAL: Ask a virtual area of 0x400000000 bytes
00:04:56.658 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:56.658 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:56.658 EAL: Hugepages will be freed exactly as allocated.
00:04:56.658 EAL: No shared files mode enabled, IPC is disabled
00:04:56.658 EAL: No shared files mode enabled, IPC is disabled
00:04:56.658 EAL: TSC frequency is ~2290000 KHz
00:04:56.658 EAL: Main lcore 0 is ready (tid=7f669264da40;cpuset=[0])
00:04:56.658 EAL: Trying to obtain current memory policy.
00:04:56.658 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:56.658 EAL: Restoring previous memory policy: 0
00:04:56.658 EAL: request: mp_malloc_sync
00:04:56.658 EAL: No shared files mode enabled, IPC is disabled
00:04:56.658 EAL: Heap on socket 0 was expanded by 2MB
00:04:56.658 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:04:56.658 EAL: No shared files mode enabled, IPC is disabled
00:04:56.658 EAL: No PCI address specified using 'addr=' in: bus=pci
00:04:56.658 EAL: Mem event callback 'spdk:(nil)' registered
00:04:56.658 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:04:56.658
00:04:56.658
00:04:56.658 CUnit - A unit testing framework for C - Version 2.1-3
00:04:56.658 http://cunit.sourceforge.net/
00:04:56.658
00:04:56.658
00:04:56.658 Suite: components_suite
00:04:57.228 Test: vtophys_malloc_test ...passed
00:04:57.228 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:57.228 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:57.228 EAL: Restoring previous memory policy: 4
00:04:57.228 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.228 EAL: request: mp_malloc_sync
00:04:57.228 EAL: No shared files mode enabled, IPC is disabled
00:04:57.228 EAL: Heap on socket 0 was expanded by 4MB
00:04:57.228 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.229 EAL: request: mp_malloc_sync
00:04:57.229 EAL: No shared files mode enabled, IPC is disabled
00:04:57.229 EAL: Heap on socket 0 was shrunk by 4MB
00:04:57.229 EAL: Trying to obtain current memory policy.
00:04:57.229 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:57.229 EAL: Restoring previous memory policy: 4
00:04:57.229 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.229 EAL: request: mp_malloc_sync
00:04:57.229 EAL: No shared files mode enabled, IPC is disabled
00:04:57.229 EAL: Heap on socket 0 was expanded by 6MB
00:04:57.229 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.229 EAL: request: mp_malloc_sync
00:04:57.229 EAL: No shared files mode enabled, IPC is disabled
00:04:57.229 EAL: Heap on socket 0 was shrunk by 6MB
00:04:57.229 EAL: Trying to obtain current memory policy.
00:04:57.229 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:57.229 EAL: Restoring previous memory policy: 4
00:04:57.229 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.229 EAL: request: mp_malloc_sync
00:04:57.229 EAL: No shared files mode enabled, IPC is disabled
00:04:57.229 EAL: Heap on socket 0 was expanded by 10MB
00:04:57.229 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.229 EAL: request: mp_malloc_sync
00:04:57.229 EAL: No shared files mode enabled, IPC is disabled
00:04:57.229 EAL: Heap on socket 0 was shrunk by 10MB
00:04:57.229 EAL: Trying to obtain current memory policy.
00:04:57.229 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:57.229 EAL: Restoring previous memory policy: 4
00:04:57.229 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.229 EAL: request: mp_malloc_sync
00:04:57.229 EAL: No shared files mode enabled, IPC is disabled
00:04:57.229 EAL: Heap on socket 0 was expanded by 18MB
00:04:57.229 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.229 EAL: request: mp_malloc_sync
00:04:57.229 EAL: No shared files mode enabled, IPC is disabled
00:04:57.229 EAL: Heap on socket 0 was shrunk by 18MB
00:04:57.229 EAL: Trying to obtain current memory policy.
00:04:57.229 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:57.229 EAL: Restoring previous memory policy: 4
00:04:57.229 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.229 EAL: request: mp_malloc_sync
00:04:57.229 EAL: No shared files mode enabled, IPC is disabled
00:04:57.229 EAL: Heap on socket 0 was expanded by 34MB
00:04:57.229 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.229 EAL: request: mp_malloc_sync
00:04:57.229 EAL: No shared files mode enabled, IPC is disabled
00:04:57.229 EAL: Heap on socket 0 was shrunk by 34MB
00:04:57.229 EAL: Trying to obtain current memory policy.
00:04:57.229 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:57.229 EAL: Restoring previous memory policy: 4
00:04:57.229 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.229 EAL: request: mp_malloc_sync
00:04:57.229 EAL: No shared files mode enabled, IPC is disabled
00:04:57.229 EAL: Heap on socket 0 was expanded by 66MB
00:04:57.229 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.229 EAL: request: mp_malloc_sync
00:04:57.229 EAL: No shared files mode enabled, IPC is disabled
00:04:57.229 EAL: Heap on socket 0 was shrunk by 66MB
00:04:57.229 EAL: Trying to obtain current memory policy.
00:04:57.229 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:57.229 EAL: Restoring previous memory policy: 4
00:04:57.229 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.229 EAL: request: mp_malloc_sync
00:04:57.229 EAL: No shared files mode enabled, IPC is disabled
00:04:57.229 EAL: Heap on socket 0 was expanded by 130MB
00:04:57.229 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.229 EAL: request: mp_malloc_sync
00:04:57.229 EAL: No shared files mode enabled, IPC is disabled
00:04:57.229 EAL: Heap on socket 0 was shrunk by 130MB
00:04:57.229 EAL: Trying to obtain current memory policy.
00:04:57.229 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:57.229 EAL: Restoring previous memory policy: 4
00:04:57.229 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.229 EAL: request: mp_malloc_sync
00:04:57.229 EAL: No shared files mode enabled, IPC is disabled
00:04:57.229 EAL: Heap on socket 0 was expanded by 258MB
00:04:57.229 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.229 EAL: request: mp_malloc_sync
00:04:57.229 EAL: No shared files mode enabled, IPC is disabled
00:04:57.229 EAL: Heap on socket 0 was shrunk by 258MB
00:04:57.229 EAL: Trying to obtain current memory policy.
00:04:57.229 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:57.489 EAL: Restoring previous memory policy: 4
00:04:57.489 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.489 EAL: request: mp_malloc_sync
00:04:57.489 EAL: No shared files mode enabled, IPC is disabled
00:04:57.489 EAL: Heap on socket 0 was expanded by 514MB
00:04:57.489 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.489 EAL: request: mp_malloc_sync
00:04:57.489 EAL: No shared files mode enabled, IPC is disabled
00:04:57.489 EAL: Heap on socket 0 was shrunk by 514MB
00:04:57.489 EAL: Trying to obtain current memory policy.
00:04:57.489 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:57.748 EAL: Restoring previous memory policy: 4
00:04:57.748 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.748 EAL: request: mp_malloc_sync
00:04:57.748 EAL: No shared files mode enabled, IPC is disabled
00:04:57.748 EAL: Heap on socket 0 was expanded by 1026MB
00:04:58.007 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.007 passed
00:04:58.007
00:04:58.007 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:58.007               suites      1      1    n/a      0        0
00:04:58.007                tests      2      2      2      0        0
00:04:58.007              asserts   5533   5533   5533      0      n/a
00:04:58.007
00:04:58.007 Elapsed time =    1.334 seconds
00:04:58.007 EAL: request: mp_malloc_sync
00:04:58.007 EAL: No shared files mode enabled, IPC is disabled
00:04:58.007 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:58.007 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.007 EAL: request: mp_malloc_sync
00:04:58.007 EAL: No shared files mode enabled, IPC is disabled
00:04:58.007 EAL: Heap on socket 0 was shrunk by 2MB
00:04:58.007 EAL: No shared files mode enabled, IPC is disabled
00:04:58.007 EAL: No shared files mode enabled, IPC is disabled
00:04:58.007 EAL: No shared files mode enabled, IPC is disabled
00:04:58.007
00:04:58.007 real	0m1.571s
00:04:58.007 user	0m0.772s
00:04:58.007 sys	0m0.670s
00:04:58.007 23:44:52 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:58.007 23:44:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:58.007 ************************************
00:04:58.007 END TEST env_vtophys
00:04:58.007 ************************************
00:04:58.267 23:44:52 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:58.267 23:44:52 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:58.267 23:44:52 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:58.267 23:44:52 env -- common/autotest_common.sh@10 -- # set +x
00:04:58.267 ************************************
00:04:58.267 START TEST env_pci
00:04:58.267 ************************************
00:04:58.267 23:44:52 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:58.267
00:04:58.267
00:04:58.267 CUnit - A unit testing framework for C - Version 2.1-3
00:04:58.267 http://cunit.sourceforge.net/
00:04:58.267
00:04:58.267
00:04:58.267 Suite: pci
00:04:58.267 Test: pci_hook ...[2024-11-02 23:44:52.165563] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68793 has claimed it
00:04:58.267 EAL: Cannot find device (10000:00:01.0)
00:04:58.267 EAL: Failed to attach device on primary process
00:04:58.267 passed
00:04:58.267
00:04:58.267 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:58.267               suites      1      1    n/a      0        0
00:04:58.267                tests      1      1      1      0        0
00:04:58.267              asserts     25     25     25      0      n/a
00:04:58.267
00:04:58.267 Elapsed time =    0.009 seconds
00:04:58.267
00:04:58.267 real	0m0.096s
00:04:58.267 user	0m0.045s
00:04:58.267 sys	0m0.049s
00:04:58.267 ************************************
00:04:58.267 END TEST env_pci
00:04:58.267 ************************************
00:04:58.267 23:44:52 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:58.267 23:44:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:58.267 23:44:52 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:58.267 23:44:52 env -- env/env.sh@15 -- # uname
00:04:58.267 23:44:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:58.267 23:44:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:58.267 23:44:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:58.267 23:44:52 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:04:58.267 23:44:52 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:58.267 23:44:52 env -- common/autotest_common.sh@10 -- # set +x
00:04:58.267 ************************************
00:04:58.267 START TEST env_dpdk_post_init
00:04:58.267 ************************************
00:04:58.267 23:44:52 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:58.267 EAL: Detected CPU lcores: 10
00:04:58.267 EAL: Detected NUMA nodes: 1
00:04:58.267 EAL: Detected shared linkage of DPDK
00:04:58.267 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:58.267 EAL: Selected IOVA mode 'PA'
00:04:58.526 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:58.526 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:04:58.526 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:04:58.526 Starting DPDK initialization...
00:04:58.526 Starting SPDK post initialization...
00:04:58.526 SPDK NVMe probe
00:04:58.526 Attaching to 0000:00:10.0
00:04:58.526 Attaching to 0000:00:11.0
00:04:58.526 Attached to 0000:00:10.0
00:04:58.526 Attached to 0000:00:11.0
00:04:58.526 Cleaning up...
00:04:58.526
00:04:58.526 real	0m0.240s
00:04:58.526 user	0m0.070s
00:04:58.526 sys	0m0.071s
00:04:58.526 23:44:52 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:58.526 23:44:52 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:58.526 ************************************
00:04:58.526 END TEST env_dpdk_post_init
00:04:58.526 ************************************
00:04:58.526 23:44:52 env -- env/env.sh@26 -- # uname
00:04:58.526 23:44:52 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:58.526 23:44:52 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:58.526 23:44:52 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:58.526 23:44:52 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:58.526 23:44:52 env -- common/autotest_common.sh@10 -- # set +x
00:04:58.526 ************************************
00:04:58.526 START TEST env_mem_callbacks
00:04:58.526 ************************************
00:04:58.526 23:44:52 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:58.786 EAL: Detected CPU lcores: 10
00:04:58.786 EAL: Detected NUMA nodes: 1
00:04:58.786 EAL: Detected shared linkage of DPDK
00:04:58.786 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:58.786 EAL: Selected IOVA mode 'PA'
00:04:58.786 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:58.786
00:04:58.786
00:04:58.786 CUnit - A unit testing framework for C - Version 2.1-3
00:04:58.786 http://cunit.sourceforge.net/
00:04:58.786
00:04:58.786
00:04:58.786 Suite: memory
00:04:58.786 Test: test ...
00:04:58.786 register 0x200000200000 2097152
00:04:58.786 malloc 3145728
00:04:58.786 register 0x200000400000 4194304
00:04:58.786 buf 0x200000500000 len 3145728 PASSED
00:04:58.786 malloc 64
00:04:58.786 buf 0x2000004fff40 len 64 PASSED
00:04:58.786 malloc 4194304
00:04:58.786 register 0x200000800000 6291456
00:04:58.786 buf 0x200000a00000 len 4194304 PASSED
00:04:58.786 free 0x200000500000 3145728
00:04:58.786 free 0x2000004fff40 64
00:04:58.786 unregister 0x200000400000 4194304 PASSED
00:04:58.786 free 0x200000a00000 4194304
00:04:58.786 unregister 0x200000800000 6291456 PASSED
00:04:58.786 malloc 8388608
00:04:58.786 register 0x200000400000 10485760
00:04:58.786 buf 0x200000600000 len 8388608 PASSED
00:04:58.786 free 0x200000600000 8388608
00:04:58.786 unregister 0x200000400000 10485760 PASSED
00:04:58.786 passed
00:04:58.786
00:04:58.786 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:58.786               suites      1      1    n/a      0        0
00:04:58.786                tests      1      1      1      0        0
00:04:58.786              asserts     15     15     15      0      n/a
00:04:58.786
00:04:58.786 Elapsed time =    0.012 seconds
00:04:58.786
00:04:58.786 real	0m0.183s
00:04:58.786 user	0m0.032s
00:04:58.786 sys	0m0.049s
00:04:58.786 23:44:52 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:58.786 23:44:52 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:58.786 ************************************
00:04:58.786 END TEST env_mem_callbacks
00:04:58.786 ************************************
00:04:58.786
00:04:58.786 real	0m2.938s
00:04:58.786 user	0m1.385s
00:04:58.786 sys	0m1.228s
00:04:58.786 23:44:52 env -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:58.786 23:44:52 env -- common/autotest_common.sh@10 -- # set +x
00:04:58.786 ************************************
00:04:58.786 END TEST env
00:04:58.786 ************************************
00:04:59.046 23:44:52 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:59.046 23:44:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:59.046 23:44:52 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:59.046 23:44:52 -- common/autotest_common.sh@10 -- # set +x
00:04:59.046 ************************************
00:04:59.046 START TEST rpc
00:04:59.046 ************************************
00:04:59.046 23:44:52 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:59.046 * Looking for test storage...
00:04:59.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:04:59.046 23:44:53 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:59.046 23:44:53 rpc -- common/autotest_common.sh@1691 -- # lcov --version
00:04:59.046 23:44:53 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:59.046 23:44:53 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:59.046 23:44:53 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:59.046 23:44:53 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:59.046 23:44:53 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:59.046 23:44:53 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:59.046 23:44:53 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:59.046 23:44:53 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:59.046 23:44:53 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:59.046 23:44:53 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:59.046 23:44:53 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:59.046 23:44:53 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:59.046 23:44:53 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:59.046 23:44:53 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:59.046 23:44:53 rpc -- scripts/common.sh@345 -- # : 1
00:04:59.046 23:44:53 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:59.046 23:44:53 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:59.046 23:44:53 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:59.046 23:44:53 rpc -- scripts/common.sh@353 -- # local d=1
00:04:59.046 23:44:53 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:59.046 23:44:53 rpc -- scripts/common.sh@355 -- # echo 1
00:04:59.046 23:44:53 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:59.046 23:44:53 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:59.046 23:44:53 rpc -- scripts/common.sh@353 -- # local d=2
00:04:59.046 23:44:53 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:59.046 23:44:53 rpc -- scripts/common.sh@355 -- # echo 2
00:04:59.046 23:44:53 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:59.046 23:44:53 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:59.046 23:44:53 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:59.046 23:44:53 rpc -- scripts/common.sh@368 -- # return 0
00:04:59.046 23:44:53 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:59.047 23:44:53 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:59.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.047 --rc genhtml_branch_coverage=1
00:04:59.047 --rc genhtml_function_coverage=1
00:04:59.047 --rc genhtml_legend=1
00:04:59.047 --rc geninfo_all_blocks=1
00:04:59.047 --rc geninfo_unexecuted_blocks=1
00:04:59.047
00:04:59.047 '
00:04:59.047 23:44:53 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:59.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.047 --rc genhtml_branch_coverage=1
00:04:59.047 --rc genhtml_function_coverage=1
00:04:59.047 --rc genhtml_legend=1
00:04:59.047 --rc geninfo_all_blocks=1
00:04:59.047 --rc geninfo_unexecuted_blocks=1
00:04:59.047
00:04:59.047 '
00:04:59.047 23:44:53 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:59.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.047 --rc genhtml_branch_coverage=1
00:04:59.047 --rc genhtml_function_coverage=1
00:04:59.047 --rc genhtml_legend=1
00:04:59.047 --rc geninfo_all_blocks=1
00:04:59.047 --rc geninfo_unexecuted_blocks=1
00:04:59.047
00:04:59.047 '
00:04:59.047 23:44:53 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:59.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.047 --rc genhtml_branch_coverage=1
00:04:59.047 --rc genhtml_function_coverage=1
00:04:59.047 --rc genhtml_legend=1
00:04:59.047 --rc geninfo_all_blocks=1
00:04:59.047 --rc geninfo_unexecuted_blocks=1
00:04:59.047
00:04:59.047 '
00:04:59.047 23:44:53 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68920
00:04:59.047 23:44:53 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:04:59.047 23:44:53 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:59.047 23:44:53 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68920
00:04:59.047 23:44:53 rpc -- common/autotest_common.sh@833 -- # '[' -z 68920 ']'
00:04:59.047 23:44:53 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:59.047 23:44:53 rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:04:59.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:59.047 23:44:53 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:59.047 23:44:53 rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:04:59.047 23:44:53 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:59.307 [2024-11-02 23:44:53.230055] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization...
00:04:59.307 [2024-11-02 23:44:53.230201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68920 ] 00:04:59.307 [2024-11-02 23:44:53.384269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.566 [2024-11-02 23:44:53.413263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:59.566 [2024-11-02 23:44:53.413336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68920' to capture a snapshot of events at runtime. 00:04:59.566 [2024-11-02 23:44:53.413348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:59.566 [2024-11-02 23:44:53.413356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:59.566 [2024-11-02 23:44:53.413380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68920 for offline analysis/debug. 
00:04:59.566 [2024-11-02 23:44:53.413752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.136 23:44:54 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:00.136 23:44:54 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:00.136 23:44:54 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:00.136 23:44:54 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:00.136 23:44:54 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:00.136 23:44:54 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:00.136 23:44:54 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:00.136 23:44:54 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:00.136 23:44:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.136 ************************************ 00:05:00.136 START TEST rpc_integrity 00:05:00.136 ************************************ 00:05:00.136 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:00.136 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:00.136 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.136 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.136 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.136 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:00.136 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:00.136 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:00.136 23:44:54 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:00.136 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.136 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.136 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.136 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:00.136 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:00.136 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.136 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.136 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.136 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:00.136 { 00:05:00.136 "name": "Malloc0", 00:05:00.136 "aliases": [ 00:05:00.136 "cdf86e52-2629-4da9-85b9-2da936ff684a" 00:05:00.136 ], 00:05:00.136 "product_name": "Malloc disk", 00:05:00.136 "block_size": 512, 00:05:00.136 "num_blocks": 16384, 00:05:00.136 "uuid": "cdf86e52-2629-4da9-85b9-2da936ff684a", 00:05:00.136 "assigned_rate_limits": { 00:05:00.136 "rw_ios_per_sec": 0, 00:05:00.136 "rw_mbytes_per_sec": 0, 00:05:00.136 "r_mbytes_per_sec": 0, 00:05:00.136 "w_mbytes_per_sec": 0 00:05:00.136 }, 00:05:00.136 "claimed": false, 00:05:00.136 "zoned": false, 00:05:00.136 "supported_io_types": { 00:05:00.136 "read": true, 00:05:00.136 "write": true, 00:05:00.136 "unmap": true, 00:05:00.136 "flush": true, 00:05:00.136 "reset": true, 00:05:00.136 "nvme_admin": false, 00:05:00.136 "nvme_io": false, 00:05:00.136 "nvme_io_md": false, 00:05:00.136 "write_zeroes": true, 00:05:00.136 "zcopy": true, 00:05:00.136 "get_zone_info": false, 00:05:00.136 "zone_management": false, 00:05:00.136 "zone_append": false, 00:05:00.136 "compare": false, 00:05:00.136 "compare_and_write": false, 00:05:00.136 "abort": true, 00:05:00.136 "seek_hole": false, 
00:05:00.136 "seek_data": false, 00:05:00.136 "copy": true, 00:05:00.136 "nvme_iov_md": false 00:05:00.136 }, 00:05:00.136 "memory_domains": [ 00:05:00.136 { 00:05:00.136 "dma_device_id": "system", 00:05:00.136 "dma_device_type": 1 00:05:00.136 }, 00:05:00.136 { 00:05:00.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.136 "dma_device_type": 2 00:05:00.136 } 00:05:00.136 ], 00:05:00.136 "driver_specific": {} 00:05:00.136 } 00:05:00.136 ]' 00:05:00.136 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:00.136 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:00.136 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:00.136 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.136 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.136 [2024-11-02 23:44:54.207258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:00.136 [2024-11-02 23:44:54.207351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:00.136 [2024-11-02 23:44:54.207395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:05:00.136 [2024-11-02 23:44:54.207408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:00.136 [2024-11-02 23:44:54.209798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:00.136 [2024-11-02 23:44:54.209835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:00.136 Passthru0 00:05:00.136 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.136 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:00.136 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.136 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:00.403 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.403 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:00.403 { 00:05:00.403 "name": "Malloc0", 00:05:00.403 "aliases": [ 00:05:00.403 "cdf86e52-2629-4da9-85b9-2da936ff684a" 00:05:00.403 ], 00:05:00.403 "product_name": "Malloc disk", 00:05:00.403 "block_size": 512, 00:05:00.403 "num_blocks": 16384, 00:05:00.403 "uuid": "cdf86e52-2629-4da9-85b9-2da936ff684a", 00:05:00.403 "assigned_rate_limits": { 00:05:00.403 "rw_ios_per_sec": 0, 00:05:00.403 "rw_mbytes_per_sec": 0, 00:05:00.403 "r_mbytes_per_sec": 0, 00:05:00.403 "w_mbytes_per_sec": 0 00:05:00.403 }, 00:05:00.403 "claimed": true, 00:05:00.403 "claim_type": "exclusive_write", 00:05:00.403 "zoned": false, 00:05:00.403 "supported_io_types": { 00:05:00.403 "read": true, 00:05:00.403 "write": true, 00:05:00.403 "unmap": true, 00:05:00.403 "flush": true, 00:05:00.403 "reset": true, 00:05:00.403 "nvme_admin": false, 00:05:00.403 "nvme_io": false, 00:05:00.403 "nvme_io_md": false, 00:05:00.403 "write_zeroes": true, 00:05:00.403 "zcopy": true, 00:05:00.403 "get_zone_info": false, 00:05:00.403 "zone_management": false, 00:05:00.403 "zone_append": false, 00:05:00.403 "compare": false, 00:05:00.403 "compare_and_write": false, 00:05:00.403 "abort": true, 00:05:00.403 "seek_hole": false, 00:05:00.403 "seek_data": false, 00:05:00.403 "copy": true, 00:05:00.403 "nvme_iov_md": false 00:05:00.403 }, 00:05:00.403 "memory_domains": [ 00:05:00.403 { 00:05:00.403 "dma_device_id": "system", 00:05:00.403 "dma_device_type": 1 00:05:00.403 }, 00:05:00.403 { 00:05:00.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.403 "dma_device_type": 2 00:05:00.403 } 00:05:00.403 ], 00:05:00.403 "driver_specific": {} 00:05:00.403 }, 00:05:00.403 { 00:05:00.403 "name": "Passthru0", 00:05:00.403 "aliases": [ 00:05:00.403 "49223b07-d725-5b2f-a1c7-0d66c9bb26f9" 00:05:00.403 ], 00:05:00.403 "product_name": "passthru", 00:05:00.403 
"block_size": 512, 00:05:00.403 "num_blocks": 16384, 00:05:00.403 "uuid": "49223b07-d725-5b2f-a1c7-0d66c9bb26f9", 00:05:00.403 "assigned_rate_limits": { 00:05:00.403 "rw_ios_per_sec": 0, 00:05:00.403 "rw_mbytes_per_sec": 0, 00:05:00.403 "r_mbytes_per_sec": 0, 00:05:00.403 "w_mbytes_per_sec": 0 00:05:00.403 }, 00:05:00.403 "claimed": false, 00:05:00.403 "zoned": false, 00:05:00.403 "supported_io_types": { 00:05:00.403 "read": true, 00:05:00.403 "write": true, 00:05:00.403 "unmap": true, 00:05:00.403 "flush": true, 00:05:00.403 "reset": true, 00:05:00.403 "nvme_admin": false, 00:05:00.403 "nvme_io": false, 00:05:00.403 "nvme_io_md": false, 00:05:00.403 "write_zeroes": true, 00:05:00.403 "zcopy": true, 00:05:00.403 "get_zone_info": false, 00:05:00.403 "zone_management": false, 00:05:00.403 "zone_append": false, 00:05:00.403 "compare": false, 00:05:00.403 "compare_and_write": false, 00:05:00.403 "abort": true, 00:05:00.403 "seek_hole": false, 00:05:00.403 "seek_data": false, 00:05:00.403 "copy": true, 00:05:00.403 "nvme_iov_md": false 00:05:00.403 }, 00:05:00.403 "memory_domains": [ 00:05:00.403 { 00:05:00.403 "dma_device_id": "system", 00:05:00.403 "dma_device_type": 1 00:05:00.403 }, 00:05:00.403 { 00:05:00.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.403 "dma_device_type": 2 00:05:00.403 } 00:05:00.403 ], 00:05:00.403 "driver_specific": { 00:05:00.403 "passthru": { 00:05:00.403 "name": "Passthru0", 00:05:00.403 "base_bdev_name": "Malloc0" 00:05:00.403 } 00:05:00.404 } 00:05:00.404 } 00:05:00.404 ]' 00:05:00.404 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:00.404 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:00.404 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:00.404 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.404 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.404 23:44:54 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.404 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:00.404 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.404 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.404 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.404 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:00.404 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.404 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.404 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.404 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:00.404 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:00.404 23:44:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:00.404 00:05:00.404 real 0m0.311s 00:05:00.404 user 0m0.180s 00:05:00.404 sys 0m0.052s 00:05:00.404 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:00.404 23:44:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.404 ************************************ 00:05:00.404 END TEST rpc_integrity 00:05:00.404 ************************************ 00:05:00.404 23:44:54 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:00.404 23:44:54 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:00.404 23:44:54 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:00.404 23:44:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.404 ************************************ 00:05:00.404 START TEST rpc_plugins 00:05:00.404 ************************************ 00:05:00.404 23:44:54 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:00.404 23:44:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:00.404 23:44:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.404 23:44:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.404 23:44:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.404 23:44:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:00.404 23:44:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:00.404 23:44:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.404 23:44:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.404 23:44:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.404 23:44:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:00.404 { 00:05:00.404 "name": "Malloc1", 00:05:00.404 "aliases": [ 00:05:00.404 "6b271c0c-8d60-4c76-91ef-74db66118a40" 00:05:00.404 ], 00:05:00.404 "product_name": "Malloc disk", 00:05:00.404 "block_size": 4096, 00:05:00.404 "num_blocks": 256, 00:05:00.404 "uuid": "6b271c0c-8d60-4c76-91ef-74db66118a40", 00:05:00.404 "assigned_rate_limits": { 00:05:00.404 "rw_ios_per_sec": 0, 00:05:00.404 "rw_mbytes_per_sec": 0, 00:05:00.404 "r_mbytes_per_sec": 0, 00:05:00.404 "w_mbytes_per_sec": 0 00:05:00.404 }, 00:05:00.404 "claimed": false, 00:05:00.404 "zoned": false, 00:05:00.404 "supported_io_types": { 00:05:00.404 "read": true, 00:05:00.404 "write": true, 00:05:00.404 "unmap": true, 00:05:00.404 "flush": true, 00:05:00.404 "reset": true, 00:05:00.404 "nvme_admin": false, 00:05:00.404 "nvme_io": false, 00:05:00.404 "nvme_io_md": false, 00:05:00.404 "write_zeroes": true, 00:05:00.404 "zcopy": true, 00:05:00.404 "get_zone_info": false, 00:05:00.404 "zone_management": false, 00:05:00.404 "zone_append": false, 00:05:00.404 "compare": false, 00:05:00.404 "compare_and_write": false, 00:05:00.404 "abort": true, 00:05:00.404 "seek_hole": false, 00:05:00.404 "seek_data": false, 00:05:00.404 "copy": 
true, 00:05:00.404 "nvme_iov_md": false 00:05:00.404 }, 00:05:00.404 "memory_domains": [ 00:05:00.404 { 00:05:00.404 "dma_device_id": "system", 00:05:00.404 "dma_device_type": 1 00:05:00.404 }, 00:05:00.404 { 00:05:00.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.404 "dma_device_type": 2 00:05:00.404 } 00:05:00.404 ], 00:05:00.404 "driver_specific": {} 00:05:00.404 } 00:05:00.404 ]' 00:05:00.404 23:44:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:00.666 23:44:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:00.666 23:44:54 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:00.666 23:44:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.666 23:44:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.666 23:44:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.666 23:44:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:00.666 23:44:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.666 23:44:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.666 23:44:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.666 23:44:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:00.666 23:44:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:00.666 23:44:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:00.666 00:05:00.666 real 0m0.159s 00:05:00.666 user 0m0.097s 00:05:00.666 sys 0m0.020s 00:05:00.666 23:44:54 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:00.666 23:44:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.666 ************************************ 00:05:00.666 END TEST rpc_plugins 00:05:00.666 ************************************ 00:05:00.666 23:44:54 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:00.666 23:44:54 rpc -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:00.666 23:44:54 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:00.666 23:44:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.666 ************************************ 00:05:00.666 START TEST rpc_trace_cmd_test 00:05:00.666 ************************************ 00:05:00.666 23:44:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:05:00.666 23:44:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:00.666 23:44:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:00.666 23:44:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.666 23:44:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:00.666 23:44:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.666 23:44:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:00.666 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68920", 00:05:00.666 "tpoint_group_mask": "0x8", 00:05:00.666 "iscsi_conn": { 00:05:00.666 "mask": "0x2", 00:05:00.666 "tpoint_mask": "0x0" 00:05:00.666 }, 00:05:00.666 "scsi": { 00:05:00.666 "mask": "0x4", 00:05:00.666 "tpoint_mask": "0x0" 00:05:00.666 }, 00:05:00.666 "bdev": { 00:05:00.666 "mask": "0x8", 00:05:00.666 "tpoint_mask": "0xffffffffffffffff" 00:05:00.666 }, 00:05:00.666 "nvmf_rdma": { 00:05:00.666 "mask": "0x10", 00:05:00.666 "tpoint_mask": "0x0" 00:05:00.666 }, 00:05:00.666 "nvmf_tcp": { 00:05:00.666 "mask": "0x20", 00:05:00.666 "tpoint_mask": "0x0" 00:05:00.666 }, 00:05:00.666 "ftl": { 00:05:00.666 "mask": "0x40", 00:05:00.666 "tpoint_mask": "0x0" 00:05:00.666 }, 00:05:00.666 "blobfs": { 00:05:00.666 "mask": "0x80", 00:05:00.666 "tpoint_mask": "0x0" 00:05:00.666 }, 00:05:00.666 "dsa": { 00:05:00.666 "mask": "0x200", 00:05:00.666 "tpoint_mask": "0x0" 00:05:00.666 }, 00:05:00.666 "thread": { 00:05:00.666 "mask": "0x400", 00:05:00.666 
"tpoint_mask": "0x0" 00:05:00.666 }, 00:05:00.666 "nvme_pcie": { 00:05:00.666 "mask": "0x800", 00:05:00.666 "tpoint_mask": "0x0" 00:05:00.666 }, 00:05:00.666 "iaa": { 00:05:00.666 "mask": "0x1000", 00:05:00.666 "tpoint_mask": "0x0" 00:05:00.666 }, 00:05:00.666 "nvme_tcp": { 00:05:00.666 "mask": "0x2000", 00:05:00.666 "tpoint_mask": "0x0" 00:05:00.666 }, 00:05:00.666 "bdev_nvme": { 00:05:00.666 "mask": "0x4000", 00:05:00.666 "tpoint_mask": "0x0" 00:05:00.666 }, 00:05:00.666 "sock": { 00:05:00.666 "mask": "0x8000", 00:05:00.666 "tpoint_mask": "0x0" 00:05:00.666 }, 00:05:00.666 "blob": { 00:05:00.666 "mask": "0x10000", 00:05:00.666 "tpoint_mask": "0x0" 00:05:00.666 }, 00:05:00.666 "bdev_raid": { 00:05:00.666 "mask": "0x20000", 00:05:00.666 "tpoint_mask": "0x0" 00:05:00.666 }, 00:05:00.666 "scheduler": { 00:05:00.666 "mask": "0x40000", 00:05:00.666 "tpoint_mask": "0x0" 00:05:00.666 } 00:05:00.666 }' 00:05:00.666 23:44:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:00.666 23:44:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:00.666 23:44:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:00.926 23:44:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:00.926 23:44:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:00.926 23:44:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:00.926 23:44:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:00.926 23:44:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:00.926 23:44:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:00.926 23:44:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:00.926 00:05:00.926 real 0m0.214s 00:05:00.926 user 0m0.173s 00:05:00.926 sys 0m0.034s 00:05:00.926 23:44:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:05:00.926 23:44:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:00.926 ************************************ 00:05:00.926 END TEST rpc_trace_cmd_test 00:05:00.926 ************************************ 00:05:00.926 23:44:54 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:00.926 23:44:54 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:00.926 23:44:54 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:00.926 23:44:54 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:00.926 23:44:54 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:00.926 23:44:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.926 ************************************ 00:05:00.926 START TEST rpc_daemon_integrity 00:05:00.926 ************************************ 00:05:00.926 23:44:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:00.926 23:44:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:00.926 23:44:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.926 23:44:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.926 23:44:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.926 23:44:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:00.926 23:44:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:00.926 23:44:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:00.926 23:44:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:00.926 23:44:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.926 23:44:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.926 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.926 23:44:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:00.926 23:44:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:00.926 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.926 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.186 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.186 23:44:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:01.186 { 00:05:01.186 "name": "Malloc2", 00:05:01.186 "aliases": [ 00:05:01.186 "2c89ca69-d0be-4f2b-ab5f-4db98172377d" 00:05:01.186 ], 00:05:01.186 "product_name": "Malloc disk", 00:05:01.186 "block_size": 512, 00:05:01.186 "num_blocks": 16384, 00:05:01.186 "uuid": "2c89ca69-d0be-4f2b-ab5f-4db98172377d", 00:05:01.186 "assigned_rate_limits": { 00:05:01.186 "rw_ios_per_sec": 0, 00:05:01.186 "rw_mbytes_per_sec": 0, 00:05:01.186 "r_mbytes_per_sec": 0, 00:05:01.186 "w_mbytes_per_sec": 0 00:05:01.186 }, 00:05:01.186 "claimed": false, 00:05:01.186 "zoned": false, 00:05:01.186 "supported_io_types": { 00:05:01.186 "read": true, 00:05:01.186 "write": true, 00:05:01.186 "unmap": true, 00:05:01.186 "flush": true, 00:05:01.186 "reset": true, 00:05:01.186 "nvme_admin": false, 00:05:01.186 "nvme_io": false, 00:05:01.186 "nvme_io_md": false, 00:05:01.186 "write_zeroes": true, 00:05:01.186 "zcopy": true, 00:05:01.186 "get_zone_info": false, 00:05:01.186 "zone_management": false, 00:05:01.186 "zone_append": false, 00:05:01.186 "compare": false, 00:05:01.186 "compare_and_write": false, 00:05:01.186 "abort": true, 00:05:01.186 "seek_hole": false, 00:05:01.186 "seek_data": false, 00:05:01.186 "copy": true, 00:05:01.186 "nvme_iov_md": false 00:05:01.186 }, 00:05:01.186 "memory_domains": [ 00:05:01.186 { 00:05:01.186 "dma_device_id": "system", 00:05:01.186 "dma_device_type": 1 00:05:01.186 }, 00:05:01.186 { 00:05:01.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.186 "dma_device_type": 2 00:05:01.186 } 
00:05:01.186 ], 00:05:01.186 "driver_specific": {} 00:05:01.186 } 00:05:01.186 ]' 00:05:01.186 23:44:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:01.186 23:44:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:01.186 23:44:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:01.186 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.186 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.186 [2024-11-02 23:44:55.074362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:01.186 [2024-11-02 23:44:55.074426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:01.187 [2024-11-02 23:44:55.074452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:05:01.187 [2024-11-02 23:44:55.074461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:01.187 [2024-11-02 23:44:55.076762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:01.187 [2024-11-02 23:44:55.076792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:01.187 Passthru0 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:01.187 { 00:05:01.187 "name": "Malloc2", 00:05:01.187 "aliases": [ 00:05:01.187 "2c89ca69-d0be-4f2b-ab5f-4db98172377d" 
00:05:01.187 ], 00:05:01.187 "product_name": "Malloc disk", 00:05:01.187 "block_size": 512, 00:05:01.187 "num_blocks": 16384, 00:05:01.187 "uuid": "2c89ca69-d0be-4f2b-ab5f-4db98172377d", 00:05:01.187 "assigned_rate_limits": { 00:05:01.187 "rw_ios_per_sec": 0, 00:05:01.187 "rw_mbytes_per_sec": 0, 00:05:01.187 "r_mbytes_per_sec": 0, 00:05:01.187 "w_mbytes_per_sec": 0 00:05:01.187 }, 00:05:01.187 "claimed": true, 00:05:01.187 "claim_type": "exclusive_write", 00:05:01.187 "zoned": false, 00:05:01.187 "supported_io_types": { 00:05:01.187 "read": true, 00:05:01.187 "write": true, 00:05:01.187 "unmap": true, 00:05:01.187 "flush": true, 00:05:01.187 "reset": true, 00:05:01.187 "nvme_admin": false, 00:05:01.187 "nvme_io": false, 00:05:01.187 "nvme_io_md": false, 00:05:01.187 "write_zeroes": true, 00:05:01.187 "zcopy": true, 00:05:01.187 "get_zone_info": false, 00:05:01.187 "zone_management": false, 00:05:01.187 "zone_append": false, 00:05:01.187 "compare": false, 00:05:01.187 "compare_and_write": false, 00:05:01.187 "abort": true, 00:05:01.187 "seek_hole": false, 00:05:01.187 "seek_data": false, 00:05:01.187 "copy": true, 00:05:01.187 "nvme_iov_md": false 00:05:01.187 }, 00:05:01.187 "memory_domains": [ 00:05:01.187 { 00:05:01.187 "dma_device_id": "system", 00:05:01.187 "dma_device_type": 1 00:05:01.187 }, 00:05:01.187 { 00:05:01.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.187 "dma_device_type": 2 00:05:01.187 } 00:05:01.187 ], 00:05:01.187 "driver_specific": {} 00:05:01.187 }, 00:05:01.187 { 00:05:01.187 "name": "Passthru0", 00:05:01.187 "aliases": [ 00:05:01.187 "faa7b541-deff-508a-884f-90a927a5ecb8" 00:05:01.187 ], 00:05:01.187 "product_name": "passthru", 00:05:01.187 "block_size": 512, 00:05:01.187 "num_blocks": 16384, 00:05:01.187 "uuid": "faa7b541-deff-508a-884f-90a927a5ecb8", 00:05:01.187 "assigned_rate_limits": { 00:05:01.187 "rw_ios_per_sec": 0, 00:05:01.187 "rw_mbytes_per_sec": 0, 00:05:01.187 "r_mbytes_per_sec": 0, 00:05:01.187 "w_mbytes_per_sec": 0 
00:05:01.187 }, 00:05:01.187 "claimed": false, 00:05:01.187 "zoned": false, 00:05:01.187 "supported_io_types": { 00:05:01.187 "read": true, 00:05:01.187 "write": true, 00:05:01.187 "unmap": true, 00:05:01.187 "flush": true, 00:05:01.187 "reset": true, 00:05:01.187 "nvme_admin": false, 00:05:01.187 "nvme_io": false, 00:05:01.187 "nvme_io_md": false, 00:05:01.187 "write_zeroes": true, 00:05:01.187 "zcopy": true, 00:05:01.187 "get_zone_info": false, 00:05:01.187 "zone_management": false, 00:05:01.187 "zone_append": false, 00:05:01.187 "compare": false, 00:05:01.187 "compare_and_write": false, 00:05:01.187 "abort": true, 00:05:01.187 "seek_hole": false, 00:05:01.187 "seek_data": false, 00:05:01.187 "copy": true, 00:05:01.187 "nvme_iov_md": false 00:05:01.187 }, 00:05:01.187 "memory_domains": [ 00:05:01.187 { 00:05:01.187 "dma_device_id": "system", 00:05:01.187 "dma_device_type": 1 00:05:01.187 }, 00:05:01.187 { 00:05:01.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.187 "dma_device_type": 2 00:05:01.187 } 00:05:01.187 ], 00:05:01.187 "driver_specific": { 00:05:01.187 "passthru": { 00:05:01.187 "name": "Passthru0", 00:05:01.187 "base_bdev_name": "Malloc2" 00:05:01.187 } 00:05:01.187 } 00:05:01.187 } 00:05:01.187 ]' 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:01.187 00:05:01.187 real 0m0.291s 00:05:01.187 user 0m0.176s 00:05:01.187 sys 0m0.044s 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:01.187 23:44:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.187 ************************************ 00:05:01.187 END TEST rpc_daemon_integrity 00:05:01.187 ************************************ 00:05:01.187 23:44:55 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:01.187 23:44:55 rpc -- rpc/rpc.sh@84 -- # killprocess 68920 00:05:01.187 23:44:55 rpc -- common/autotest_common.sh@952 -- # '[' -z 68920 ']' 00:05:01.187 23:44:55 rpc -- common/autotest_common.sh@956 -- # kill -0 68920 00:05:01.187 23:44:55 rpc -- common/autotest_common.sh@957 -- # uname 00:05:01.187 23:44:55 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:01.447 23:44:55 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68920 00:05:01.447 23:44:55 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:01.447 23:44:55 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:01.447 
killing process with pid 68920 00:05:01.447 23:44:55 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68920' 00:05:01.447 23:44:55 rpc -- common/autotest_common.sh@971 -- # kill 68920 00:05:01.447 23:44:55 rpc -- common/autotest_common.sh@976 -- # wait 68920 00:05:01.717 00:05:01.717 real 0m2.769s 00:05:01.717 user 0m3.332s 00:05:01.717 sys 0m0.824s 00:05:01.717 23:44:55 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:01.717 23:44:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.717 ************************************ 00:05:01.717 END TEST rpc 00:05:01.717 ************************************ 00:05:01.717 23:44:55 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:01.717 23:44:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:01.717 23:44:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.717 23:44:55 -- common/autotest_common.sh@10 -- # set +x 00:05:01.717 ************************************ 00:05:01.717 START TEST skip_rpc 00:05:01.717 ************************************ 00:05:01.717 23:44:55 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:01.976 * Looking for test storage... 
00:05:01.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:01.976 23:44:55 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:01.976 23:44:55 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:01.976 23:44:55 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:01.976 23:44:55 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.976 23:44:55 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:01.976 23:44:55 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.976 23:44:55 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:01.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.976 --rc genhtml_branch_coverage=1 00:05:01.976 --rc genhtml_function_coverage=1 00:05:01.976 --rc genhtml_legend=1 00:05:01.976 --rc geninfo_all_blocks=1 00:05:01.976 --rc geninfo_unexecuted_blocks=1 00:05:01.976 00:05:01.976 ' 00:05:01.976 23:44:55 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:01.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.976 --rc genhtml_branch_coverage=1 00:05:01.976 --rc genhtml_function_coverage=1 00:05:01.976 --rc genhtml_legend=1 00:05:01.976 --rc geninfo_all_blocks=1 00:05:01.976 --rc geninfo_unexecuted_blocks=1 00:05:01.976 00:05:01.976 ' 00:05:01.976 23:44:55 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:01.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.976 --rc genhtml_branch_coverage=1 00:05:01.976 --rc genhtml_function_coverage=1 00:05:01.976 --rc genhtml_legend=1 00:05:01.976 --rc geninfo_all_blocks=1 00:05:01.976 --rc geninfo_unexecuted_blocks=1 00:05:01.976 00:05:01.976 ' 00:05:01.976 23:44:55 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:01.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.976 --rc genhtml_branch_coverage=1 00:05:01.976 --rc genhtml_function_coverage=1 00:05:01.976 --rc genhtml_legend=1 00:05:01.976 --rc geninfo_all_blocks=1 00:05:01.976 --rc geninfo_unexecuted_blocks=1 00:05:01.976 00:05:01.976 ' 00:05:01.976 23:44:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:01.976 23:44:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:01.976 23:44:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:01.976 23:44:55 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:01.976 23:44:55 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.976 23:44:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.976 ************************************ 00:05:01.976 START TEST skip_rpc 00:05:01.976 ************************************ 00:05:01.976 23:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:01.976 23:44:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69121 00:05:01.976 23:44:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:01.976 23:44:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.976 23:44:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:02.236 [2024-11-02 23:44:56.072570] Starting SPDK v25.01-pre 
git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:05:02.236 [2024-11-02 23:44:56.072694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69121 ] 00:05:02.236 [2024-11-02 23:44:56.214330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.236 [2024-11-02 23:44:56.239844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69121 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 69121 ']' 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 69121 00:05:07.509 23:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:05:07.509 23:45:01 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:07.509 23:45:01 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69121 00:05:07.509 23:45:01 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:07.509 23:45:01 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:07.509 killing process with pid 69121 00:05:07.509 23:45:01 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69121' 00:05:07.509 23:45:01 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 69121 00:05:07.509 23:45:01 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 69121 00:05:07.509 00:05:07.509 real 0m5.418s 00:05:07.509 user 0m5.043s 00:05:07.509 sys 0m0.302s 00:05:07.509 23:45:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:07.509 23:45:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.509 ************************************ 00:05:07.509 END TEST skip_rpc 00:05:07.509 ************************************ 00:05:07.509 23:45:01 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:07.509 23:45:01 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:07.509 23:45:01 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:07.510 23:45:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.510 
************************************ 00:05:07.510 START TEST skip_rpc_with_json 00:05:07.510 ************************************ 00:05:07.510 23:45:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:05:07.510 23:45:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:07.510 23:45:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69209 00:05:07.510 23:45:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.510 23:45:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.510 23:45:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69209 00:05:07.510 23:45:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 69209 ']' 00:05:07.510 23:45:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.510 23:45:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:07.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.510 23:45:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.510 23:45:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:07.510 23:45:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.510 [2024-11-02 23:45:01.554305] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:05:07.510 [2024-11-02 23:45:01.554451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69209 ] 00:05:07.768 [2024-11-02 23:45:01.687312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.768 [2024-11-02 23:45:01.713493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.334 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:08.334 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:05:08.334 23:45:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:08.335 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.335 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.335 [2024-11-02 23:45:02.384490] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:08.335 request: 00:05:08.335 { 00:05:08.335 "trtype": "tcp", 00:05:08.335 "method": "nvmf_get_transports", 00:05:08.335 "req_id": 1 00:05:08.335 } 00:05:08.335 Got JSON-RPC error response 00:05:08.335 response: 00:05:08.335 { 00:05:08.335 "code": -19, 00:05:08.335 "message": "No such device" 00:05:08.335 } 00:05:08.335 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:08.335 23:45:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:08.335 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.335 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.335 [2024-11-02 23:45:02.396594] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:08.335 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.335 23:45:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:08.335 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.335 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.595 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.595 23:45:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:08.595 { 00:05:08.595 "subsystems": [ 00:05:08.595 { 00:05:08.595 "subsystem": "fsdev", 00:05:08.595 "config": [ 00:05:08.595 { 00:05:08.595 "method": "fsdev_set_opts", 00:05:08.595 "params": { 00:05:08.595 "fsdev_io_pool_size": 65535, 00:05:08.595 "fsdev_io_cache_size": 256 00:05:08.595 } 00:05:08.595 } 00:05:08.595 ] 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "subsystem": "keyring", 00:05:08.595 "config": [] 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "subsystem": "iobuf", 00:05:08.595 "config": [ 00:05:08.595 { 00:05:08.595 "method": "iobuf_set_options", 00:05:08.595 "params": { 00:05:08.595 "small_pool_count": 8192, 00:05:08.595 "large_pool_count": 1024, 00:05:08.595 "small_bufsize": 8192, 00:05:08.595 "large_bufsize": 135168, 00:05:08.595 "enable_numa": false 00:05:08.595 } 00:05:08.595 } 00:05:08.595 ] 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "subsystem": "sock", 00:05:08.595 "config": [ 00:05:08.595 { 00:05:08.595 "method": "sock_set_default_impl", 00:05:08.595 "params": { 00:05:08.595 "impl_name": "posix" 00:05:08.595 } 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "method": "sock_impl_set_options", 00:05:08.595 "params": { 00:05:08.595 "impl_name": "ssl", 00:05:08.595 "recv_buf_size": 4096, 00:05:08.595 "send_buf_size": 4096, 00:05:08.595 "enable_recv_pipe": true, 00:05:08.595 "enable_quickack": false, 00:05:08.595 
"enable_placement_id": 0, 00:05:08.595 "enable_zerocopy_send_server": true, 00:05:08.595 "enable_zerocopy_send_client": false, 00:05:08.595 "zerocopy_threshold": 0, 00:05:08.595 "tls_version": 0, 00:05:08.595 "enable_ktls": false 00:05:08.595 } 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "method": "sock_impl_set_options", 00:05:08.595 "params": { 00:05:08.595 "impl_name": "posix", 00:05:08.595 "recv_buf_size": 2097152, 00:05:08.595 "send_buf_size": 2097152, 00:05:08.595 "enable_recv_pipe": true, 00:05:08.595 "enable_quickack": false, 00:05:08.595 "enable_placement_id": 0, 00:05:08.595 "enable_zerocopy_send_server": true, 00:05:08.595 "enable_zerocopy_send_client": false, 00:05:08.595 "zerocopy_threshold": 0, 00:05:08.595 "tls_version": 0, 00:05:08.595 "enable_ktls": false 00:05:08.595 } 00:05:08.595 } 00:05:08.595 ] 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "subsystem": "vmd", 00:05:08.595 "config": [] 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "subsystem": "accel", 00:05:08.595 "config": [ 00:05:08.595 { 00:05:08.595 "method": "accel_set_options", 00:05:08.595 "params": { 00:05:08.595 "small_cache_size": 128, 00:05:08.595 "large_cache_size": 16, 00:05:08.595 "task_count": 2048, 00:05:08.595 "sequence_count": 2048, 00:05:08.595 "buf_count": 2048 00:05:08.595 } 00:05:08.595 } 00:05:08.595 ] 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "subsystem": "bdev", 00:05:08.595 "config": [ 00:05:08.595 { 00:05:08.595 "method": "bdev_set_options", 00:05:08.595 "params": { 00:05:08.595 "bdev_io_pool_size": 65535, 00:05:08.595 "bdev_io_cache_size": 256, 00:05:08.595 "bdev_auto_examine": true, 00:05:08.595 "iobuf_small_cache_size": 128, 00:05:08.595 "iobuf_large_cache_size": 16 00:05:08.595 } 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "method": "bdev_raid_set_options", 00:05:08.595 "params": { 00:05:08.595 "process_window_size_kb": 1024, 00:05:08.595 "process_max_bandwidth_mb_sec": 0 00:05:08.595 } 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "method": "bdev_iscsi_set_options", 
00:05:08.595 "params": { 00:05:08.595 "timeout_sec": 30 00:05:08.595 } 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "method": "bdev_nvme_set_options", 00:05:08.595 "params": { 00:05:08.595 "action_on_timeout": "none", 00:05:08.595 "timeout_us": 0, 00:05:08.595 "timeout_admin_us": 0, 00:05:08.595 "keep_alive_timeout_ms": 10000, 00:05:08.595 "arbitration_burst": 0, 00:05:08.595 "low_priority_weight": 0, 00:05:08.595 "medium_priority_weight": 0, 00:05:08.595 "high_priority_weight": 0, 00:05:08.595 "nvme_adminq_poll_period_us": 10000, 00:05:08.595 "nvme_ioq_poll_period_us": 0, 00:05:08.595 "io_queue_requests": 0, 00:05:08.595 "delay_cmd_submit": true, 00:05:08.595 "transport_retry_count": 4, 00:05:08.595 "bdev_retry_count": 3, 00:05:08.595 "transport_ack_timeout": 0, 00:05:08.595 "ctrlr_loss_timeout_sec": 0, 00:05:08.595 "reconnect_delay_sec": 0, 00:05:08.595 "fast_io_fail_timeout_sec": 0, 00:05:08.595 "disable_auto_failback": false, 00:05:08.595 "generate_uuids": false, 00:05:08.595 "transport_tos": 0, 00:05:08.595 "nvme_error_stat": false, 00:05:08.595 "rdma_srq_size": 0, 00:05:08.595 "io_path_stat": false, 00:05:08.595 "allow_accel_sequence": false, 00:05:08.595 "rdma_max_cq_size": 0, 00:05:08.595 "rdma_cm_event_timeout_ms": 0, 00:05:08.595 "dhchap_digests": [ 00:05:08.595 "sha256", 00:05:08.595 "sha384", 00:05:08.595 "sha512" 00:05:08.595 ], 00:05:08.595 "dhchap_dhgroups": [ 00:05:08.595 "null", 00:05:08.595 "ffdhe2048", 00:05:08.595 "ffdhe3072", 00:05:08.595 "ffdhe4096", 00:05:08.595 "ffdhe6144", 00:05:08.595 "ffdhe8192" 00:05:08.595 ] 00:05:08.595 } 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "method": "bdev_nvme_set_hotplug", 00:05:08.595 "params": { 00:05:08.595 "period_us": 100000, 00:05:08.595 "enable": false 00:05:08.595 } 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "method": "bdev_wait_for_examine" 00:05:08.595 } 00:05:08.595 ] 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "subsystem": "scsi", 00:05:08.595 "config": null 00:05:08.595 }, 00:05:08.595 { 
00:05:08.595 "subsystem": "scheduler", 00:05:08.595 "config": [ 00:05:08.595 { 00:05:08.595 "method": "framework_set_scheduler", 00:05:08.595 "params": { 00:05:08.595 "name": "static" 00:05:08.595 } 00:05:08.595 } 00:05:08.595 ] 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "subsystem": "vhost_scsi", 00:05:08.595 "config": [] 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "subsystem": "vhost_blk", 00:05:08.595 "config": [] 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "subsystem": "ublk", 00:05:08.595 "config": [] 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "subsystem": "nbd", 00:05:08.595 "config": [] 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "subsystem": "nvmf", 00:05:08.595 "config": [ 00:05:08.595 { 00:05:08.595 "method": "nvmf_set_config", 00:05:08.595 "params": { 00:05:08.595 "discovery_filter": "match_any", 00:05:08.595 "admin_cmd_passthru": { 00:05:08.595 "identify_ctrlr": false 00:05:08.595 }, 00:05:08.595 "dhchap_digests": [ 00:05:08.595 "sha256", 00:05:08.595 "sha384", 00:05:08.595 "sha512" 00:05:08.595 ], 00:05:08.595 "dhchap_dhgroups": [ 00:05:08.595 "null", 00:05:08.595 "ffdhe2048", 00:05:08.595 "ffdhe3072", 00:05:08.595 "ffdhe4096", 00:05:08.595 "ffdhe6144", 00:05:08.595 "ffdhe8192" 00:05:08.595 ] 00:05:08.595 } 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "method": "nvmf_set_max_subsystems", 00:05:08.595 "params": { 00:05:08.595 "max_subsystems": 1024 00:05:08.595 } 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "method": "nvmf_set_crdt", 00:05:08.595 "params": { 00:05:08.595 "crdt1": 0, 00:05:08.595 "crdt2": 0, 00:05:08.595 "crdt3": 0 00:05:08.595 } 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "method": "nvmf_create_transport", 00:05:08.595 "params": { 00:05:08.595 "trtype": "TCP", 00:05:08.595 "max_queue_depth": 128, 00:05:08.595 "max_io_qpairs_per_ctrlr": 127, 00:05:08.595 "in_capsule_data_size": 4096, 00:05:08.595 "max_io_size": 131072, 00:05:08.595 "io_unit_size": 131072, 00:05:08.595 "max_aq_depth": 128, 00:05:08.595 "num_shared_buffers": 511, 
00:05:08.595 "buf_cache_size": 4294967295, 00:05:08.595 "dif_insert_or_strip": false, 00:05:08.595 "zcopy": false, 00:05:08.595 "c2h_success": true, 00:05:08.595 "sock_priority": 0, 00:05:08.595 "abort_timeout_sec": 1, 00:05:08.595 "ack_timeout": 0, 00:05:08.595 "data_wr_pool_size": 0 00:05:08.595 } 00:05:08.595 } 00:05:08.595 ] 00:05:08.595 }, 00:05:08.595 { 00:05:08.595 "subsystem": "iscsi", 00:05:08.595 "config": [ 00:05:08.595 { 00:05:08.596 "method": "iscsi_set_options", 00:05:08.596 "params": { 00:05:08.596 "node_base": "iqn.2016-06.io.spdk", 00:05:08.596 "max_sessions": 128, 00:05:08.596 "max_connections_per_session": 2, 00:05:08.596 "max_queue_depth": 64, 00:05:08.596 "default_time2wait": 2, 00:05:08.596 "default_time2retain": 20, 00:05:08.596 "first_burst_length": 8192, 00:05:08.596 "immediate_data": true, 00:05:08.596 "allow_duplicated_isid": false, 00:05:08.596 "error_recovery_level": 0, 00:05:08.596 "nop_timeout": 60, 00:05:08.596 "nop_in_interval": 30, 00:05:08.596 "disable_chap": false, 00:05:08.596 "require_chap": false, 00:05:08.596 "mutual_chap": false, 00:05:08.596 "chap_group": 0, 00:05:08.596 "max_large_datain_per_connection": 64, 00:05:08.596 "max_r2t_per_connection": 4, 00:05:08.596 "pdu_pool_size": 36864, 00:05:08.596 "immediate_data_pool_size": 16384, 00:05:08.596 "data_out_pool_size": 2048 00:05:08.596 } 00:05:08.596 } 00:05:08.596 ] 00:05:08.596 } 00:05:08.596 ] 00:05:08.596 } 00:05:08.596 23:45:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:08.596 23:45:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69209 00:05:08.596 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 69209 ']' 00:05:08.596 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 69209 00:05:08.596 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:08.596 23:45:02 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:08.596 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69209 00:05:08.596 killing process with pid 69209 00:05:08.596 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:08.596 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:08.596 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69209' 00:05:08.596 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 69209 00:05:08.596 23:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 69209 00:05:09.165 23:45:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:09.165 23:45:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69237 00:05:09.165 23:45:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:14.447 23:45:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69237 00:05:14.447 23:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 69237 ']' 00:05:14.447 23:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 69237 00:05:14.447 23:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:14.447 23:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:14.447 23:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69237 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = 
sudo ']' 00:05:14.447 killing process with pid 69237 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69237' 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 69237 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 69237 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:14.447 00:05:14.447 real 0m6.932s 00:05:14.447 user 0m6.532s 00:05:14.447 sys 0m0.692s 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:14.447 ************************************ 00:05:14.447 END TEST skip_rpc_with_json 00:05:14.447 ************************************ 00:05:14.447 23:45:08 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:14.447 23:45:08 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:14.447 23:45:08 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.447 23:45:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.447 ************************************ 00:05:14.447 START TEST skip_rpc_with_delay 00:05:14.447 ************************************ 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:14.447 23:45:08 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:14.447 23:45:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.711 [2024-11-02 23:45:08.571256] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:14.711 23:45:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:14.711 23:45:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:14.711 23:45:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:14.711 ************************************ 00:05:14.711 END TEST skip_rpc_with_delay 00:05:14.711 ************************************ 00:05:14.711 23:45:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:14.711 00:05:14.711 real 0m0.178s 00:05:14.711 user 0m0.088s 00:05:14.711 sys 0m0.088s 00:05:14.711 23:45:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.711 23:45:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:14.711 23:45:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:14.711 23:45:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:14.711 23:45:08 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:14.711 23:45:08 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:14.711 23:45:08 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.711 23:45:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.711 ************************************ 00:05:14.711 START TEST exit_on_failed_rpc_init 00:05:14.711 ************************************ 00:05:14.711 23:45:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:14.711 23:45:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69349 00:05:14.712 23:45:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.712 23:45:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69349 00:05:14.712 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.712 23:45:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 69349 ']' 00:05:14.712 23:45:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.712 23:45:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:14.712 23:45:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.712 23:45:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:14.712 23:45:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.972 [2024-11-02 23:45:08.807382] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:05:14.972 [2024-11-02 23:45:08.807527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69349 ] 00:05:14.972 [2024-11-02 23:45:08.941137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.972 [2024-11-02 23:45:08.970352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.911 23:45:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:15.911 23:45:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:15.911 23:45:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.911 23:45:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.911 23:45:09 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@650 -- # local es=0 00:05:15.911 23:45:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.911 23:45:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.911 23:45:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.911 23:45:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.911 23:45:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.911 23:45:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.911 23:45:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.911 23:45:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.911 23:45:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:15.911 23:45:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.911 [2024-11-02 23:45:09.741191] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:05:15.911 [2024-11-02 23:45:09.741427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69367 ] 00:05:15.911 [2024-11-02 23:45:09.886715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.911 [2024-11-02 23:45:09.916402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.911 [2024-11-02 23:45:09.916588] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:15.911 [2024-11-02 23:45:09.916643] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:15.911 [2024-11-02 23:45:09.916666] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:16.170 23:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:16.170 23:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:16.170 23:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:16.170 23:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:16.170 23:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:16.170 23:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:16.170 23:45:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:16.170 23:45:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69349 00:05:16.170 23:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 69349 ']' 00:05:16.170 23:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 69349 00:05:16.170 23:45:10 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:16.170 23:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:16.170 23:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69349 00:05:16.170 killing process with pid 69349 00:05:16.170 23:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:16.170 23:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:16.170 23:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69349' 00:05:16.170 23:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 69349 00:05:16.170 23:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 69349 00:05:16.430 00:05:16.430 real 0m1.691s 00:05:16.430 user 0m1.835s 00:05:16.430 sys 0m0.463s 00:05:16.430 ************************************ 00:05:16.430 END TEST exit_on_failed_rpc_init 00:05:16.430 ************************************ 00:05:16.430 23:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:16.430 23:45:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:16.430 23:45:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:16.430 ************************************ 00:05:16.430 END TEST skip_rpc 00:05:16.430 ************************************ 00:05:16.430 00:05:16.430 real 0m14.720s 00:05:16.430 user 0m13.709s 00:05:16.430 sys 0m1.852s 00:05:16.430 23:45:10 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:16.430 23:45:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.430 23:45:10 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:16.430 23:45:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:16.430 23:45:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:16.430 23:45:10 -- common/autotest_common.sh@10 -- # set +x 00:05:16.698 ************************************ 00:05:16.698 START TEST rpc_client 00:05:16.698 ************************************ 00:05:16.698 23:45:10 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:16.698 * Looking for test storage... 00:05:16.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:16.698 23:45:10 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:16.698 23:45:10 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:16.698 23:45:10 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:16.698 23:45:10 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.698 23:45:10 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:16.699 23:45:10 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.699 23:45:10 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.699 23:45:10 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.699 23:45:10 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:16.699 23:45:10 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.699 23:45:10 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:16.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.699 --rc genhtml_branch_coverage=1 00:05:16.699 --rc genhtml_function_coverage=1 00:05:16.699 --rc genhtml_legend=1 00:05:16.699 --rc geninfo_all_blocks=1 00:05:16.699 --rc geninfo_unexecuted_blocks=1 00:05:16.699 00:05:16.699 ' 00:05:16.699 23:45:10 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:16.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.699 --rc genhtml_branch_coverage=1 00:05:16.699 --rc genhtml_function_coverage=1 00:05:16.699 --rc 
genhtml_legend=1 00:05:16.699 --rc geninfo_all_blocks=1 00:05:16.699 --rc geninfo_unexecuted_blocks=1 00:05:16.699 00:05:16.699 ' 00:05:16.699 23:45:10 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:16.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.699 --rc genhtml_branch_coverage=1 00:05:16.699 --rc genhtml_function_coverage=1 00:05:16.699 --rc genhtml_legend=1 00:05:16.699 --rc geninfo_all_blocks=1 00:05:16.699 --rc geninfo_unexecuted_blocks=1 00:05:16.699 00:05:16.699 ' 00:05:16.699 23:45:10 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:16.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.699 --rc genhtml_branch_coverage=1 00:05:16.699 --rc genhtml_function_coverage=1 00:05:16.699 --rc genhtml_legend=1 00:05:16.699 --rc geninfo_all_blocks=1 00:05:16.699 --rc geninfo_unexecuted_blocks=1 00:05:16.699 00:05:16.699 ' 00:05:16.699 23:45:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:16.963 OK 00:05:16.963 23:45:10 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:16.963 00:05:16.963 real 0m0.293s 00:05:16.963 user 0m0.148s 00:05:16.963 sys 0m0.161s 00:05:16.963 23:45:10 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:16.963 23:45:10 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:16.963 ************************************ 00:05:16.963 END TEST rpc_client 00:05:16.963 ************************************ 00:05:16.963 23:45:10 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:16.963 23:45:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:16.963 23:45:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:16.963 23:45:10 -- common/autotest_common.sh@10 -- # set +x 00:05:16.963 ************************************ 00:05:16.963 START TEST json_config 
00:05:16.963 ************************************ 00:05:16.963 23:45:10 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:16.963 23:45:10 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:16.963 23:45:10 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:16.963 23:45:10 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:16.963 23:45:11 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:16.963 23:45:11 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.963 23:45:11 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.963 23:45:11 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.963 23:45:11 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.963 23:45:11 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.963 23:45:11 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.963 23:45:11 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.963 23:45:11 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.963 23:45:11 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.963 23:45:11 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.963 23:45:11 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.963 23:45:11 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:16.963 23:45:11 json_config -- scripts/common.sh@345 -- # : 1 00:05:16.963 23:45:11 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.963 23:45:11 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.223 23:45:11 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:17.223 23:45:11 json_config -- scripts/common.sh@353 -- # local d=1 00:05:17.223 23:45:11 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.223 23:45:11 json_config -- scripts/common.sh@355 -- # echo 1 00:05:17.223 23:45:11 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.223 23:45:11 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:17.223 23:45:11 json_config -- scripts/common.sh@353 -- # local d=2 00:05:17.223 23:45:11 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.223 23:45:11 json_config -- scripts/common.sh@355 -- # echo 2 00:05:17.223 23:45:11 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.223 23:45:11 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.223 23:45:11 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.223 23:45:11 json_config -- scripts/common.sh@368 -- # return 0 00:05:17.223 23:45:11 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.223 23:45:11 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:17.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.223 --rc genhtml_branch_coverage=1 00:05:17.223 --rc genhtml_function_coverage=1 00:05:17.223 --rc genhtml_legend=1 00:05:17.223 --rc geninfo_all_blocks=1 00:05:17.223 --rc geninfo_unexecuted_blocks=1 00:05:17.223 00:05:17.223 ' 00:05:17.223 23:45:11 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:17.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.223 --rc genhtml_branch_coverage=1 00:05:17.223 --rc genhtml_function_coverage=1 00:05:17.223 --rc genhtml_legend=1 00:05:17.223 --rc geninfo_all_blocks=1 00:05:17.223 --rc geninfo_unexecuted_blocks=1 00:05:17.223 00:05:17.223 ' 00:05:17.223 23:45:11 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:17.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.223 --rc genhtml_branch_coverage=1 00:05:17.223 --rc genhtml_function_coverage=1 00:05:17.223 --rc genhtml_legend=1 00:05:17.223 --rc geninfo_all_blocks=1 00:05:17.223 --rc geninfo_unexecuted_blocks=1 00:05:17.223 00:05:17.223 ' 00:05:17.223 23:45:11 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:17.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.223 --rc genhtml_branch_coverage=1 00:05:17.223 --rc genhtml_function_coverage=1 00:05:17.223 --rc genhtml_legend=1 00:05:17.223 --rc geninfo_all_blocks=1 00:05:17.223 --rc geninfo_unexecuted_blocks=1 00:05:17.223 00:05:17.223 ' 00:05:17.223 23:45:11 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:17.223 23:45:11 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:17.223 23:45:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.223 23:45:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.223 23:45:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.223 23:45:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.223 23:45:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.223 23:45:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.223 23:45:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.223 23:45:11 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.223 23:45:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.223 23:45:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.223 23:45:11 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0135e85e-f94b-4193-aab3-cb65764a45eb 00:05:17.223 23:45:11 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=0135e85e-f94b-4193-aab3-cb65764a45eb 00:05:17.223 23:45:11 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.223 23:45:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.223 23:45:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:17.223 23:45:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.223 23:45:11 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:17.223 23:45:11 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:17.223 23:45:11 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.223 23:45:11 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.223 23:45:11 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.223 23:45:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.223 23:45:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.223 23:45:11 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.223 23:45:11 json_config -- paths/export.sh@5 -- # export PATH 00:05:17.224 23:45:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.224 23:45:11 json_config -- nvmf/common.sh@51 -- # : 0 00:05:17.224 23:45:11 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:17.224 23:45:11 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:17.224 23:45:11 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.224 23:45:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.224 23:45:11 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.224 23:45:11 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:17.224 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:17.224 23:45:11 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:17.224 23:45:11 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:17.224 23:45:11 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:17.224 23:45:11 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:17.224 23:45:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:17.224 23:45:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:17.224 23:45:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:17.224 23:45:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:17.224 23:45:11 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:17.224 WARNING: No tests are enabled so not running JSON configuration tests 00:05:17.224 23:45:11 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:17.224 00:05:17.224 real 0m0.235s 00:05:17.224 user 0m0.142s 00:05:17.224 sys 0m0.095s 00:05:17.224 23:45:11 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:17.224 23:45:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.224 ************************************ 00:05:17.224 END TEST json_config 00:05:17.224 ************************************ 00:05:17.224 23:45:11 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:17.224 23:45:11 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:17.224 23:45:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.224 23:45:11 -- common/autotest_common.sh@10 -- # set +x 00:05:17.224 ************************************ 00:05:17.224 START TEST json_config_extra_key 00:05:17.224 ************************************ 00:05:17.224 23:45:11 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:17.224 23:45:11 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:17.224 23:45:11 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:05:17.224 23:45:11 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:17.484 23:45:11 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:17.484 23:45:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.485 23:45:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:17.485 23:45:11 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.485 23:45:11 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.485 23:45:11 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.485 23:45:11 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:17.485 23:45:11 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.485 23:45:11 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:17.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.485 --rc genhtml_branch_coverage=1 00:05:17.485 --rc genhtml_function_coverage=1 00:05:17.485 --rc genhtml_legend=1 00:05:17.485 --rc geninfo_all_blocks=1 00:05:17.485 --rc geninfo_unexecuted_blocks=1 00:05:17.485 00:05:17.485 ' 00:05:17.485 23:45:11 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:17.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.485 --rc genhtml_branch_coverage=1 00:05:17.485 --rc genhtml_function_coverage=1 00:05:17.485 --rc 
genhtml_legend=1 00:05:17.485 --rc geninfo_all_blocks=1 00:05:17.485 --rc geninfo_unexecuted_blocks=1 00:05:17.485 00:05:17.485 ' 00:05:17.485 23:45:11 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:17.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.485 --rc genhtml_branch_coverage=1 00:05:17.485 --rc genhtml_function_coverage=1 00:05:17.485 --rc genhtml_legend=1 00:05:17.485 --rc geninfo_all_blocks=1 00:05:17.485 --rc geninfo_unexecuted_blocks=1 00:05:17.485 00:05:17.485 ' 00:05:17.485 23:45:11 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:17.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.485 --rc genhtml_branch_coverage=1 00:05:17.485 --rc genhtml_function_coverage=1 00:05:17.485 --rc genhtml_legend=1 00:05:17.485 --rc geninfo_all_blocks=1 00:05:17.485 --rc geninfo_unexecuted_blocks=1 00:05:17.485 00:05:17.485 ' 00:05:17.485 23:45:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0135e85e-f94b-4193-aab3-cb65764a45eb 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0135e85e-f94b-4193-aab3-cb65764a45eb 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:17.485 23:45:11 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:17.485 23:45:11 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.485 23:45:11 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.485 23:45:11 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.485 23:45:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.485 23:45:11 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.485 23:45:11 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.485 23:45:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:17.485 23:45:11 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:17.485 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:17.485 23:45:11 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:17.485 23:45:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:17.485 23:45:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:17.485 23:45:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:17.485 23:45:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:17.485 23:45:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:17.485 23:45:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:17.485 23:45:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:17.485 23:45:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:17.485 23:45:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:17.485 23:45:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:17.485 23:45:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:17.485 INFO: launching applications... 
00:05:17.485 23:45:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:17.485 23:45:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:17.485 23:45:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:17.485 23:45:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:17.485 23:45:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:17.485 23:45:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:17.485 23:45:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.485 23:45:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.485 Waiting for target to run... 00:05:17.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:17.485 23:45:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69549 00:05:17.485 23:45:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:17.485 23:45:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69549 /var/tmp/spdk_tgt.sock 00:05:17.485 23:45:11 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 69549 ']' 00:05:17.485 23:45:11 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:17.485 23:45:11 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:17.485 23:45:11 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:17.485 23:45:11 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:17.485 23:45:11 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:17.485 23:45:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:17.485 [2024-11-02 23:45:11.505581] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:05:17.485 [2024-11-02 23:45:11.506138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69549 ] 00:05:18.055 [2024-11-02 23:45:11.877231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.055 [2024-11-02 23:45:11.895212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.314 23:45:12 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:18.314 23:45:12 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:18.314 23:45:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:18.314 00:05:18.314 23:45:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:18.314 INFO: shutting down applications... 00:05:18.314 23:45:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:18.314 23:45:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:18.314 23:45:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:18.314 23:45:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69549 ]] 00:05:18.314 23:45:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69549 00:05:18.314 23:45:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:18.314 23:45:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.314 23:45:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69549 00:05:18.314 23:45:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:18.889 23:45:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:18.889 23:45:12 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.889 23:45:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69549 00:05:18.889 23:45:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:18.889 23:45:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:18.889 23:45:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:18.889 23:45:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:18.889 SPDK target shutdown done 00:05:18.889 23:45:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:18.889 Success 00:05:18.889 00:05:18.889 real 0m1.666s 00:05:18.889 user 0m1.451s 00:05:18.889 sys 0m0.487s 00:05:18.889 23:45:12 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.889 23:45:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:18.889 ************************************ 00:05:18.889 END TEST json_config_extra_key 00:05:18.889 ************************************ 00:05:18.889 23:45:12 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:18.889 23:45:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:18.889 23:45:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.889 23:45:12 -- common/autotest_common.sh@10 -- # set +x 00:05:18.889 ************************************ 00:05:18.889 START TEST alias_rpc 00:05:18.889 ************************************ 00:05:18.889 23:45:12 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:19.152 * Looking for test storage... 
00:05:19.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:19.152 23:45:13 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:19.152 23:45:13 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:19.152 23:45:13 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:19.152 23:45:13 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.152 23:45:13 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:19.152 23:45:13 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.152 23:45:13 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:19.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.152 --rc genhtml_branch_coverage=1 00:05:19.152 --rc genhtml_function_coverage=1 00:05:19.152 --rc genhtml_legend=1 00:05:19.152 --rc geninfo_all_blocks=1 00:05:19.152 --rc geninfo_unexecuted_blocks=1 00:05:19.152 00:05:19.152 ' 00:05:19.153 23:45:13 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:19.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.153 --rc genhtml_branch_coverage=1 00:05:19.153 --rc genhtml_function_coverage=1 00:05:19.153 --rc genhtml_legend=1 00:05:19.153 --rc geninfo_all_blocks=1 00:05:19.153 --rc geninfo_unexecuted_blocks=1 00:05:19.153 00:05:19.153 ' 00:05:19.153 23:45:13 alias_rpc -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:05:19.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.153 --rc genhtml_branch_coverage=1 00:05:19.153 --rc genhtml_function_coverage=1 00:05:19.153 --rc genhtml_legend=1 00:05:19.153 --rc geninfo_all_blocks=1 00:05:19.153 --rc geninfo_unexecuted_blocks=1 00:05:19.153 00:05:19.153 ' 00:05:19.153 23:45:13 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:19.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.153 --rc genhtml_branch_coverage=1 00:05:19.153 --rc genhtml_function_coverage=1 00:05:19.153 --rc genhtml_legend=1 00:05:19.153 --rc geninfo_all_blocks=1 00:05:19.153 --rc geninfo_unexecuted_blocks=1 00:05:19.153 00:05:19.153 ' 00:05:19.153 23:45:13 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:19.153 23:45:13 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69623 00:05:19.153 23:45:13 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:19.153 23:45:13 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69623 00:05:19.153 23:45:13 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 69623 ']' 00:05:19.153 23:45:13 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.153 23:45:13 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:19.153 23:45:13 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.153 23:45:13 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:19.153 23:45:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.153 [2024-11-02 23:45:13.241360] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:05:19.153 [2024-11-02 23:45:13.241587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69623 ] 00:05:19.412 [2024-11-02 23:45:13.375251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.413 [2024-11-02 23:45:13.415199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.981 23:45:14 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:19.981 23:45:14 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:19.981 23:45:14 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:20.239 23:45:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69623 00:05:20.239 23:45:14 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 69623 ']' 00:05:20.239 23:45:14 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 69623 00:05:20.239 23:45:14 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:20.239 23:45:14 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:20.239 23:45:14 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69623 00:05:20.239 killing process with pid 69623 00:05:20.239 23:45:14 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:20.239 23:45:14 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:20.239 23:45:14 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69623' 00:05:20.239 23:45:14 alias_rpc -- common/autotest_common.sh@971 -- # kill 69623 00:05:20.239 23:45:14 alias_rpc -- common/autotest_common.sh@976 -- # wait 69623 00:05:21.176 ************************************ 00:05:21.176 END TEST alias_rpc 00:05:21.176 ************************************ 00:05:21.176 00:05:21.176 real 
0m2.006s 00:05:21.176 user 0m1.873s 00:05:21.176 sys 0m0.650s 00:05:21.176 23:45:14 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:21.176 23:45:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.176 23:45:14 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:21.176 23:45:14 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:21.176 23:45:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:21.176 23:45:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:21.176 23:45:14 -- common/autotest_common.sh@10 -- # set +x 00:05:21.176 ************************************ 00:05:21.176 START TEST spdkcli_tcp 00:05:21.176 ************************************ 00:05:21.176 23:45:14 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:21.176 * Looking for test storage... 00:05:21.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:21.176 23:45:15 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:21.176 23:45:15 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:21.176 23:45:15 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:21.176 23:45:15 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.176 
23:45:15 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.176 23:45:15 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:21.177 23:45:15 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.177 23:45:15 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:21.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.177 --rc genhtml_branch_coverage=1 00:05:21.177 --rc genhtml_function_coverage=1 00:05:21.177 --rc genhtml_legend=1 
00:05:21.177 --rc geninfo_all_blocks=1 00:05:21.177 --rc geninfo_unexecuted_blocks=1 00:05:21.177 00:05:21.177 ' 00:05:21.177 23:45:15 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:21.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.177 --rc genhtml_branch_coverage=1 00:05:21.177 --rc genhtml_function_coverage=1 00:05:21.177 --rc genhtml_legend=1 00:05:21.177 --rc geninfo_all_blocks=1 00:05:21.177 --rc geninfo_unexecuted_blocks=1 00:05:21.177 00:05:21.177 ' 00:05:21.177 23:45:15 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:21.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.177 --rc genhtml_branch_coverage=1 00:05:21.177 --rc genhtml_function_coverage=1 00:05:21.177 --rc genhtml_legend=1 00:05:21.177 --rc geninfo_all_blocks=1 00:05:21.177 --rc geninfo_unexecuted_blocks=1 00:05:21.177 00:05:21.177 ' 00:05:21.177 23:45:15 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:21.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.177 --rc genhtml_branch_coverage=1 00:05:21.177 --rc genhtml_function_coverage=1 00:05:21.177 --rc genhtml_legend=1 00:05:21.177 --rc geninfo_all_blocks=1 00:05:21.177 --rc geninfo_unexecuted_blocks=1 00:05:21.177 00:05:21.177 ' 00:05:21.177 23:45:15 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:21.177 23:45:15 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:21.177 23:45:15 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:21.177 23:45:15 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:21.177 23:45:15 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:21.177 23:45:15 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:21.177 23:45:15 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:21.177 23:45:15 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:21.177 23:45:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.177 23:45:15 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69708 00:05:21.177 23:45:15 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:21.177 23:45:15 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69708 00:05:21.177 23:45:15 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 69708 ']' 00:05:21.177 23:45:15 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.177 23:45:15 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:21.177 23:45:15 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.177 23:45:15 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:21.177 23:45:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.435 [2024-11-02 23:45:15.310197] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:05:21.436 [2024-11-02 23:45:15.310401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69708 ] 00:05:21.436 [2024-11-02 23:45:15.466664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.436 [2024-11-02 23:45:15.510292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.436 [2024-11-02 23:45:15.510373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.370 23:45:16 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:22.370 23:45:16 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:22.370 23:45:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:22.370 23:45:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69725 00:05:22.370 23:45:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:22.370 [ 00:05:22.370 "bdev_malloc_delete", 00:05:22.370 "bdev_malloc_create", 00:05:22.370 "bdev_null_resize", 00:05:22.370 "bdev_null_delete", 00:05:22.370 "bdev_null_create", 00:05:22.370 "bdev_nvme_cuse_unregister", 00:05:22.370 "bdev_nvme_cuse_register", 00:05:22.370 "bdev_opal_new_user", 00:05:22.370 "bdev_opal_set_lock_state", 00:05:22.370 "bdev_opal_delete", 00:05:22.370 "bdev_opal_get_info", 00:05:22.370 "bdev_opal_create", 00:05:22.370 "bdev_nvme_opal_revert", 00:05:22.370 "bdev_nvme_opal_init", 00:05:22.370 "bdev_nvme_send_cmd", 00:05:22.370 "bdev_nvme_set_keys", 00:05:22.370 "bdev_nvme_get_path_iostat", 00:05:22.370 "bdev_nvme_get_mdns_discovery_info", 00:05:22.370 "bdev_nvme_stop_mdns_discovery", 00:05:22.370 "bdev_nvme_start_mdns_discovery", 00:05:22.370 "bdev_nvme_set_multipath_policy", 00:05:22.370 
"bdev_nvme_set_preferred_path", 00:05:22.370 "bdev_nvme_get_io_paths", 00:05:22.370 "bdev_nvme_remove_error_injection", 00:05:22.370 "bdev_nvme_add_error_injection", 00:05:22.370 "bdev_nvme_get_discovery_info", 00:05:22.370 "bdev_nvme_stop_discovery", 00:05:22.370 "bdev_nvme_start_discovery", 00:05:22.370 "bdev_nvme_get_controller_health_info", 00:05:22.370 "bdev_nvme_disable_controller", 00:05:22.370 "bdev_nvme_enable_controller", 00:05:22.370 "bdev_nvme_reset_controller", 00:05:22.370 "bdev_nvme_get_transport_statistics", 00:05:22.370 "bdev_nvme_apply_firmware", 00:05:22.370 "bdev_nvme_detach_controller", 00:05:22.370 "bdev_nvme_get_controllers", 00:05:22.370 "bdev_nvme_attach_controller", 00:05:22.370 "bdev_nvme_set_hotplug", 00:05:22.370 "bdev_nvme_set_options", 00:05:22.370 "bdev_passthru_delete", 00:05:22.370 "bdev_passthru_create", 00:05:22.370 "bdev_lvol_set_parent_bdev", 00:05:22.370 "bdev_lvol_set_parent", 00:05:22.370 "bdev_lvol_check_shallow_copy", 00:05:22.370 "bdev_lvol_start_shallow_copy", 00:05:22.370 "bdev_lvol_grow_lvstore", 00:05:22.370 "bdev_lvol_get_lvols", 00:05:22.370 "bdev_lvol_get_lvstores", 00:05:22.370 "bdev_lvol_delete", 00:05:22.370 "bdev_lvol_set_read_only", 00:05:22.370 "bdev_lvol_resize", 00:05:22.370 "bdev_lvol_decouple_parent", 00:05:22.370 "bdev_lvol_inflate", 00:05:22.370 "bdev_lvol_rename", 00:05:22.370 "bdev_lvol_clone_bdev", 00:05:22.370 "bdev_lvol_clone", 00:05:22.370 "bdev_lvol_snapshot", 00:05:22.370 "bdev_lvol_create", 00:05:22.370 "bdev_lvol_delete_lvstore", 00:05:22.370 "bdev_lvol_rename_lvstore", 00:05:22.370 "bdev_lvol_create_lvstore", 00:05:22.370 "bdev_raid_set_options", 00:05:22.370 "bdev_raid_remove_base_bdev", 00:05:22.370 "bdev_raid_add_base_bdev", 00:05:22.370 "bdev_raid_delete", 00:05:22.370 "bdev_raid_create", 00:05:22.370 "bdev_raid_get_bdevs", 00:05:22.370 "bdev_error_inject_error", 00:05:22.370 "bdev_error_delete", 00:05:22.370 "bdev_error_create", 00:05:22.370 "bdev_split_delete", 00:05:22.370 
"bdev_split_create", 00:05:22.370 "bdev_delay_delete", 00:05:22.370 "bdev_delay_create", 00:05:22.370 "bdev_delay_update_latency", 00:05:22.370 "bdev_zone_block_delete", 00:05:22.370 "bdev_zone_block_create", 00:05:22.370 "blobfs_create", 00:05:22.370 "blobfs_detect", 00:05:22.370 "blobfs_set_cache_size", 00:05:22.370 "bdev_aio_delete", 00:05:22.370 "bdev_aio_rescan", 00:05:22.370 "bdev_aio_create", 00:05:22.370 "bdev_ftl_set_property", 00:05:22.370 "bdev_ftl_get_properties", 00:05:22.370 "bdev_ftl_get_stats", 00:05:22.370 "bdev_ftl_unmap", 00:05:22.370 "bdev_ftl_unload", 00:05:22.370 "bdev_ftl_delete", 00:05:22.370 "bdev_ftl_load", 00:05:22.370 "bdev_ftl_create", 00:05:22.370 "bdev_virtio_attach_controller", 00:05:22.370 "bdev_virtio_scsi_get_devices", 00:05:22.370 "bdev_virtio_detach_controller", 00:05:22.370 "bdev_virtio_blk_set_hotplug", 00:05:22.370 "bdev_iscsi_delete", 00:05:22.370 "bdev_iscsi_create", 00:05:22.370 "bdev_iscsi_set_options", 00:05:22.370 "accel_error_inject_error", 00:05:22.370 "ioat_scan_accel_module", 00:05:22.370 "dsa_scan_accel_module", 00:05:22.370 "iaa_scan_accel_module", 00:05:22.370 "keyring_file_remove_key", 00:05:22.370 "keyring_file_add_key", 00:05:22.370 "keyring_linux_set_options", 00:05:22.370 "fsdev_aio_delete", 00:05:22.370 "fsdev_aio_create", 00:05:22.370 "iscsi_get_histogram", 00:05:22.370 "iscsi_enable_histogram", 00:05:22.370 "iscsi_set_options", 00:05:22.370 "iscsi_get_auth_groups", 00:05:22.370 "iscsi_auth_group_remove_secret", 00:05:22.370 "iscsi_auth_group_add_secret", 00:05:22.370 "iscsi_delete_auth_group", 00:05:22.370 "iscsi_create_auth_group", 00:05:22.370 "iscsi_set_discovery_auth", 00:05:22.370 "iscsi_get_options", 00:05:22.370 "iscsi_target_node_request_logout", 00:05:22.370 "iscsi_target_node_set_redirect", 00:05:22.370 "iscsi_target_node_set_auth", 00:05:22.370 "iscsi_target_node_add_lun", 00:05:22.370 "iscsi_get_stats", 00:05:22.370 "iscsi_get_connections", 00:05:22.370 "iscsi_portal_group_set_auth", 
00:05:22.370 "iscsi_start_portal_group", 00:05:22.370 "iscsi_delete_portal_group", 00:05:22.370 "iscsi_create_portal_group", 00:05:22.370 "iscsi_get_portal_groups", 00:05:22.370 "iscsi_delete_target_node", 00:05:22.370 "iscsi_target_node_remove_pg_ig_maps", 00:05:22.370 "iscsi_target_node_add_pg_ig_maps", 00:05:22.370 "iscsi_create_target_node", 00:05:22.370 "iscsi_get_target_nodes", 00:05:22.370 "iscsi_delete_initiator_group", 00:05:22.370 "iscsi_initiator_group_remove_initiators", 00:05:22.370 "iscsi_initiator_group_add_initiators", 00:05:22.370 "iscsi_create_initiator_group", 00:05:22.370 "iscsi_get_initiator_groups", 00:05:22.370 "nvmf_set_crdt", 00:05:22.370 "nvmf_set_config", 00:05:22.370 "nvmf_set_max_subsystems", 00:05:22.370 "nvmf_stop_mdns_prr", 00:05:22.370 "nvmf_publish_mdns_prr", 00:05:22.370 "nvmf_subsystem_get_listeners", 00:05:22.370 "nvmf_subsystem_get_qpairs", 00:05:22.370 "nvmf_subsystem_get_controllers", 00:05:22.370 "nvmf_get_stats", 00:05:22.370 "nvmf_get_transports", 00:05:22.370 "nvmf_create_transport", 00:05:22.370 "nvmf_get_targets", 00:05:22.370 "nvmf_delete_target", 00:05:22.370 "nvmf_create_target", 00:05:22.370 "nvmf_subsystem_allow_any_host", 00:05:22.370 "nvmf_subsystem_set_keys", 00:05:22.370 "nvmf_subsystem_remove_host", 00:05:22.370 "nvmf_subsystem_add_host", 00:05:22.370 "nvmf_ns_remove_host", 00:05:22.370 "nvmf_ns_add_host", 00:05:22.370 "nvmf_subsystem_remove_ns", 00:05:22.370 "nvmf_subsystem_set_ns_ana_group", 00:05:22.370 "nvmf_subsystem_add_ns", 00:05:22.370 "nvmf_subsystem_listener_set_ana_state", 00:05:22.370 "nvmf_discovery_get_referrals", 00:05:22.370 "nvmf_discovery_remove_referral", 00:05:22.370 "nvmf_discovery_add_referral", 00:05:22.370 "nvmf_subsystem_remove_listener", 00:05:22.370 "nvmf_subsystem_add_listener", 00:05:22.370 "nvmf_delete_subsystem", 00:05:22.370 "nvmf_create_subsystem", 00:05:22.370 "nvmf_get_subsystems", 00:05:22.370 "env_dpdk_get_mem_stats", 00:05:22.370 "nbd_get_disks", 00:05:22.370 
"nbd_stop_disk", 00:05:22.370 "nbd_start_disk", 00:05:22.370 "ublk_recover_disk", 00:05:22.370 "ublk_get_disks", 00:05:22.370 "ublk_stop_disk", 00:05:22.370 "ublk_start_disk", 00:05:22.370 "ublk_destroy_target", 00:05:22.370 "ublk_create_target", 00:05:22.370 "virtio_blk_create_transport", 00:05:22.370 "virtio_blk_get_transports", 00:05:22.370 "vhost_controller_set_coalescing", 00:05:22.370 "vhost_get_controllers", 00:05:22.370 "vhost_delete_controller", 00:05:22.370 "vhost_create_blk_controller", 00:05:22.370 "vhost_scsi_controller_remove_target", 00:05:22.370 "vhost_scsi_controller_add_target", 00:05:22.370 "vhost_start_scsi_controller", 00:05:22.370 "vhost_create_scsi_controller", 00:05:22.370 "thread_set_cpumask", 00:05:22.370 "scheduler_set_options", 00:05:22.370 "framework_get_governor", 00:05:22.370 "framework_get_scheduler", 00:05:22.370 "framework_set_scheduler", 00:05:22.370 "framework_get_reactors", 00:05:22.370 "thread_get_io_channels", 00:05:22.370 "thread_get_pollers", 00:05:22.370 "thread_get_stats", 00:05:22.370 "framework_monitor_context_switch", 00:05:22.370 "spdk_kill_instance", 00:05:22.370 "log_enable_timestamps", 00:05:22.370 "log_get_flags", 00:05:22.370 "log_clear_flag", 00:05:22.370 "log_set_flag", 00:05:22.370 "log_get_level", 00:05:22.370 "log_set_level", 00:05:22.370 "log_get_print_level", 00:05:22.370 "log_set_print_level", 00:05:22.371 "framework_enable_cpumask_locks", 00:05:22.371 "framework_disable_cpumask_locks", 00:05:22.371 "framework_wait_init", 00:05:22.371 "framework_start_init", 00:05:22.371 "scsi_get_devices", 00:05:22.371 "bdev_get_histogram", 00:05:22.371 "bdev_enable_histogram", 00:05:22.371 "bdev_set_qos_limit", 00:05:22.371 "bdev_set_qd_sampling_period", 00:05:22.371 "bdev_get_bdevs", 00:05:22.371 "bdev_reset_iostat", 00:05:22.371 "bdev_get_iostat", 00:05:22.371 "bdev_examine", 00:05:22.371 "bdev_wait_for_examine", 00:05:22.371 "bdev_set_options", 00:05:22.371 "accel_get_stats", 00:05:22.371 "accel_set_options", 
00:05:22.371 "accel_set_driver", 00:05:22.371 "accel_crypto_key_destroy", 00:05:22.371 "accel_crypto_keys_get", 00:05:22.371 "accel_crypto_key_create", 00:05:22.371 "accel_assign_opc", 00:05:22.371 "accel_get_module_info", 00:05:22.371 "accel_get_opc_assignments", 00:05:22.371 "vmd_rescan", 00:05:22.371 "vmd_remove_device", 00:05:22.371 "vmd_enable", 00:05:22.371 "sock_get_default_impl", 00:05:22.371 "sock_set_default_impl", 00:05:22.371 "sock_impl_set_options", 00:05:22.371 "sock_impl_get_options", 00:05:22.371 "iobuf_get_stats", 00:05:22.371 "iobuf_set_options", 00:05:22.371 "keyring_get_keys", 00:05:22.371 "framework_get_pci_devices", 00:05:22.371 "framework_get_config", 00:05:22.371 "framework_get_subsystems", 00:05:22.371 "fsdev_set_opts", 00:05:22.371 "fsdev_get_opts", 00:05:22.371 "trace_get_info", 00:05:22.371 "trace_get_tpoint_group_mask", 00:05:22.371 "trace_disable_tpoint_group", 00:05:22.371 "trace_enable_tpoint_group", 00:05:22.371 "trace_clear_tpoint_mask", 00:05:22.371 "trace_set_tpoint_mask", 00:05:22.371 "notify_get_notifications", 00:05:22.371 "notify_get_types", 00:05:22.371 "spdk_get_version", 00:05:22.371 "rpc_get_methods" 00:05:22.371 ] 00:05:22.371 23:45:16 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:22.371 23:45:16 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:22.371 23:45:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.371 23:45:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:22.371 23:45:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69708 00:05:22.371 23:45:16 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 69708 ']' 00:05:22.371 23:45:16 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 69708 00:05:22.371 23:45:16 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:05:22.371 23:45:16 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:22.371 23:45:16 spdkcli_tcp -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69708 00:05:22.371 killing process with pid 69708 00:05:22.371 23:45:16 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:22.371 23:45:16 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:22.371 23:45:16 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69708' 00:05:22.371 23:45:16 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 69708 00:05:22.371 23:45:16 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 69708 00:05:22.937 ************************************ 00:05:22.937 END TEST spdkcli_tcp 00:05:22.937 ************************************ 00:05:22.937 00:05:22.937 real 0m2.011s 00:05:22.937 user 0m3.214s 00:05:22.937 sys 0m0.699s 00:05:22.937 23:45:16 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:22.937 23:45:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.196 23:45:17 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:23.196 23:45:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:23.196 23:45:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:23.196 23:45:17 -- common/autotest_common.sh@10 -- # set +x 00:05:23.196 ************************************ 00:05:23.196 START TEST dpdk_mem_utility 00:05:23.196 ************************************ 00:05:23.196 23:45:17 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:23.196 * Looking for test storage... 
00:05:23.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility
00:05:23.196 23:45:17 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:23.196 23:45:17 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version
00:05:23.196 23:45:17 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:23.196 23:45:17 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:23.196 23:45:17 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:05:23.196 23:45:17 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:23.196 23:45:17 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:23.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.196 --rc genhtml_branch_coverage=1
00:05:23.196 --rc genhtml_function_coverage=1
00:05:23.196 --rc genhtml_legend=1
00:05:23.196 --rc geninfo_all_blocks=1
00:05:23.196 --rc geninfo_unexecuted_blocks=1
00:05:23.196
00:05:23.196 '
00:05:23.196 23:45:17 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:23.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.196 --rc genhtml_branch_coverage=1
00:05:23.196 --rc genhtml_function_coverage=1
00:05:23.196 --rc genhtml_legend=1
00:05:23.196 --rc geninfo_all_blocks=1
00:05:23.196 --rc geninfo_unexecuted_blocks=1
00:05:23.196
00:05:23.196 '
00:05:23.196 23:45:17 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:23.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.196 --rc genhtml_branch_coverage=1
00:05:23.196 --rc genhtml_function_coverage=1
00:05:23.196 --rc genhtml_legend=1
00:05:23.196 --rc geninfo_all_blocks=1
00:05:23.196 --rc geninfo_unexecuted_blocks=1
00:05:23.196
00:05:23.196 '
00:05:23.196 23:45:17 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:23.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.196 --rc genhtml_branch_coverage=1
00:05:23.196 --rc genhtml_function_coverage=1
00:05:23.196 --rc genhtml_legend=1
00:05:23.196 --rc geninfo_all_blocks=1
00:05:23.196 --rc geninfo_unexecuted_blocks=1
00:05:23.196
00:05:23.196 '
00:05:23.196 23:45:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:05:23.196 23:45:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69808
00:05:23.196 23:45:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:23.196 23:45:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69808
00:05:23.196 23:45:17 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 69808 ']'
00:05:23.196 23:45:17 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:23.454 23:45:17 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:23.454 23:45:17 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:23.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:23.454 23:45:17 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:23.454 23:45:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:23.454 [2024-11-02 23:45:17.377982] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization...
00:05:23.454 [2024-11-02 23:45:17.378210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69808 ]
00:05:23.454 [2024-11-02 23:45:17.532669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:23.717 [2024-11-02 23:45:17.571544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:24.289 23:45:18 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:24.289 23:45:18 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0
00:05:24.289 23:45:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:05:24.289 23:45:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:05:24.289 23:45:18 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:24.289 23:45:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:24.289 {
00:05:24.289 "filename": "/tmp/spdk_mem_dump.txt"
00:05:24.289 }
00:05:24.289 23:45:18 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:24.289 23:45:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:05:24.289 DPDK memory size 810.000000 MiB in 1 heap(s)
00:05:24.289 1 heaps totaling size 810.000000 MiB
00:05:24.289 size: 810.000000 MiB heap id: 0
00:05:24.289 end heaps----------
00:05:24.289 9 mempools totaling size 595.772034 MiB
00:05:24.289 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:05:24.289 size: 158.602051 MiB name: PDU_data_out_Pool
00:05:24.289 size: 92.545471 MiB name: bdev_io_69808
00:05:24.289 size: 50.003479 MiB name: msgpool_69808
00:05:24.289 size: 36.509338 MiB name: fsdev_io_69808
00:05:24.289 size: 21.763794 MiB name: PDU_Pool
00:05:24.289 size: 19.513306 MiB name: SCSI_TASK_Pool
00:05:24.289 size: 4.133484 MiB name: evtpool_69808
00:05:24.289 size: 0.026123 MiB name: Session_Pool
00:05:24.289 end mempools-------
00:05:24.289 6 memzones totaling size 4.142822 MiB
00:05:24.289 size: 1.000366 MiB name: RG_ring_0_69808
00:05:24.289 size: 1.000366 MiB name: RG_ring_1_69808
00:05:24.289 size: 1.000366 MiB name: RG_ring_4_69808
00:05:24.289 size: 1.000366 MiB name: RG_ring_5_69808
00:05:24.289 size: 0.125366 MiB name: RG_ring_2_69808
00:05:24.289 size: 0.015991 MiB name: RG_ring_3_69808
00:05:24.289 end memzones-------
00:05:24.289 23:45:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:05:24.289 heap id: 0 total size: 810.000000 MiB number of busy elements: 312 number of free elements: 15
00:05:24.289 list of free elements.
size: 10.813416 MiB 00:05:24.289 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:24.289 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:24.289 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:24.289 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:24.289 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:24.289 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:24.289 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:24.289 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:24.289 element at address: 0x20001a600000 with size: 0.567871 MiB 00:05:24.289 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:24.289 element at address: 0x200000c00000 with size: 0.487000 MiB 00:05:24.289 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:24.289 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:24.289 element at address: 0x200027a00000 with size: 0.395752 MiB 00:05:24.289 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:24.289 list of standard malloc elements. 
size: 199.267700 MiB 00:05:24.290 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:24.290 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:24.290 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:24.290 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:24.290 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:24.290 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:24.290 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:24.290 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:24.290 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:24.290 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:24.290 element at 
address: 0x2000004ff340 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000087efc0 with size: 0.000183 MiB 
00:05:24.290 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7d6c0 with 
size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:24.290 element at address: 
0x200000c7ebc0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:24.290 
element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:24.290 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:24.290 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20001a691600 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20001a691780 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20001a691840 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20001a691900 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:05:24.290 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a692080 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a692140 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a692200 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a692380 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a692440 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a692500 with size: 0.000183 
MiB 00:05:24.291 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a692680 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a692740 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a692800 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a692980 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a693040 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a693100 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a693280 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a693340 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a693400 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a693580 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a693640 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a693700 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a693880 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a693940 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a693a00 
with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a694000 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a694180 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a694240 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a694300 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a694480 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a694540 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a694600 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a694780 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a694840 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a694900 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:05:24.291 element at 
address: 0x20001a694f00 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a695080 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a695140 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a695200 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:24.291 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a65500 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 
00:05:24.291 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6e4c0 with 
size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:05:24.291 element at address: 
0x200027a6f9c0 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:05:24.291 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:05:24.292 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:05:24.292 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:05:24.292 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:24.292 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:24.292 list of memzone associated elements. size: 599.918884 MiB 00:05:24.292 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:24.292 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:24.292 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:24.292 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:24.292 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:24.292 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_69808_0 00:05:24.292 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:24.292 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69808_0 00:05:24.292 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:24.292 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_69808_0 00:05:24.292 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:24.292 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:24.292 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:24.292 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:24.292 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:24.292 associated memzone info: size: 3.000122 MiB name: MP_evtpool_69808_0 00:05:24.292 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:24.292 associated memzone info: size: 2.000366 
MiB name: RG_MP_msgpool_69808 00:05:24.292 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:24.292 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69808 00:05:24.292 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:24.292 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:24.292 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:24.292 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:24.292 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:24.292 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:24.292 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:24.292 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:24.292 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:24.292 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69808 00:05:24.292 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:24.292 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69808 00:05:24.292 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:24.292 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69808 00:05:24.292 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:05:24.292 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69808 00:05:24.292 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:24.292 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_69808 00:05:24.292 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:24.292 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69808 00:05:24.292 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:24.292 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:24.292 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:24.292 associated memzone info: size: 0.500366 MiB name: 
RG_MP_SCSI_TASK_Pool 00:05:24.292 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:24.292 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:24.292 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:24.292 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_69808 00:05:24.292 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:24.292 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69808 00:05:24.292 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:24.292 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:24.292 element at address: 0x200027a65680 with size: 0.023743 MiB 00:05:24.292 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:24.292 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:24.292 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69808 00:05:24.292 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:05:24.292 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:24.292 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:24.292 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69808 00:05:24.292 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:24.292 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_69808 00:05:24.292 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:24.292 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69808 00:05:24.292 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:05:24.292 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:24.292 23:45:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:24.292 23:45:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69808 00:05:24.292 23:45:18 dpdk_mem_utility -- 
common/autotest_common.sh@952 -- # '[' -z 69808 ']' 00:05:24.292 23:45:18 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 69808 00:05:24.292 23:45:18 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:05:24.292 23:45:18 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:24.292 23:45:18 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69808 00:05:24.292 23:45:18 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:24.292 23:45:18 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:24.292 killing process with pid 69808 00:05:24.292 23:45:18 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69808' 00:05:24.292 23:45:18 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 69808 00:05:24.292 23:45:18 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 69808 00:05:24.894 00:05:24.894 real 0m1.899s 00:05:24.894 user 0m1.684s 00:05:24.894 sys 0m0.652s 00:05:24.894 ************************************ 00:05:24.895 END TEST dpdk_mem_utility 00:05:24.895 ************************************ 00:05:24.895 23:45:18 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:24.895 23:45:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.152 23:45:19 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:25.152 23:45:19 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:25.152 23:45:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.152 23:45:19 -- common/autotest_common.sh@10 -- # set +x 00:05:25.152 ************************************ 00:05:25.152 START TEST event 00:05:25.152 ************************************ 00:05:25.152 23:45:19 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:25.152 * Looking for test 
storage... 00:05:25.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:25.152 23:45:19 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:25.152 23:45:19 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:25.152 23:45:19 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:25.152 23:45:19 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:25.152 23:45:19 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.152 23:45:19 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.152 23:45:19 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.152 23:45:19 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.152 23:45:19 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.152 23:45:19 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.152 23:45:19 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.152 23:45:19 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.152 23:45:19 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.152 23:45:19 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.152 23:45:19 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.153 23:45:19 event -- scripts/common.sh@344 -- # case "$op" in 00:05:25.153 23:45:19 event -- scripts/common.sh@345 -- # : 1 00:05:25.153 23:45:19 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.153 23:45:19 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.153 23:45:19 event -- scripts/common.sh@365 -- # decimal 1 00:05:25.153 23:45:19 event -- scripts/common.sh@353 -- # local d=1 00:05:25.153 23:45:19 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.153 23:45:19 event -- scripts/common.sh@355 -- # echo 1 00:05:25.153 23:45:19 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.153 23:45:19 event -- scripts/common.sh@366 -- # decimal 2 00:05:25.153 23:45:19 event -- scripts/common.sh@353 -- # local d=2 00:05:25.411 23:45:19 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.411 23:45:19 event -- scripts/common.sh@355 -- # echo 2 00:05:25.411 23:45:19 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.411 23:45:19 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.411 23:45:19 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.411 23:45:19 event -- scripts/common.sh@368 -- # return 0 00:05:25.411 23:45:19 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.411 23:45:19 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:25.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.411 --rc genhtml_branch_coverage=1 00:05:25.411 --rc genhtml_function_coverage=1 00:05:25.411 --rc genhtml_legend=1 00:05:25.411 --rc geninfo_all_blocks=1 00:05:25.411 --rc geninfo_unexecuted_blocks=1 00:05:25.411 00:05:25.411 ' 00:05:25.411 23:45:19 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:25.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.411 --rc genhtml_branch_coverage=1 00:05:25.411 --rc genhtml_function_coverage=1 00:05:25.411 --rc genhtml_legend=1 00:05:25.411 --rc geninfo_all_blocks=1 00:05:25.411 --rc geninfo_unexecuted_blocks=1 00:05:25.411 00:05:25.411 ' 00:05:25.411 23:45:19 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:25.411 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:25.411 --rc genhtml_branch_coverage=1 00:05:25.411 --rc genhtml_function_coverage=1 00:05:25.411 --rc genhtml_legend=1 00:05:25.411 --rc geninfo_all_blocks=1 00:05:25.411 --rc geninfo_unexecuted_blocks=1 00:05:25.411 00:05:25.411 ' 00:05:25.411 23:45:19 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:25.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.411 --rc genhtml_branch_coverage=1 00:05:25.411 --rc genhtml_function_coverage=1 00:05:25.411 --rc genhtml_legend=1 00:05:25.411 --rc geninfo_all_blocks=1 00:05:25.411 --rc geninfo_unexecuted_blocks=1 00:05:25.411 00:05:25.411 ' 00:05:25.411 23:45:19 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:25.411 23:45:19 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:25.411 23:45:19 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:25.411 23:45:19 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:05:25.411 23:45:19 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.411 23:45:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.411 ************************************ 00:05:25.411 START TEST event_perf 00:05:25.411 ************************************ 00:05:25.411 23:45:19 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:25.411 Running I/O for 1 seconds...[2024-11-02 23:45:19.309148] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
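The lcov version gate traced above (`lt 1.15 2` walking `cmp_versions` in scripts/common.sh) compares dot-separated fields numerically. A standalone sketch of that comparison follows; `version_lt` is a hypothetical name, not the harness's own helper, and this is a reconstruction from the trace rather than the actual scripts/common.sh code.

```shell
# Hypothetical re-implementation of the cmp_versions '<' path traced above:
# split each version on dots, compare numerically field by field, and treat
# missing trailing fields as 0 (so 1.15 < 2, but 2 is not < 1.15).
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)                     # IFS=. splits "1.15" into (1 15)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0                # strictly smaller field: ver1 < ver2
        (( x > y )) && return 1                # strictly larger field: ver1 >= ver2
    done
    return 1                                   # all fields equal: not strictly less
}
```

The harness uses the result only to pick lcov flags, so an equal-versions case correctly falls through to "not less than".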
00:05:25.411 [2024-11-02 23:45:19.309310] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69894 ] 00:05:25.411 [2024-11-02 23:45:19.466166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.670 [2024-11-02 23:45:19.514152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.670 [2024-11-02 23:45:19.514345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.670 [2024-11-02 23:45:19.514441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.670 Running I/O for 1 seconds...[2024-11-02 23:45:19.514606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.614 00:05:26.614 lcore 0: 106915 00:05:26.614 lcore 1: 106918 00:05:26.614 lcore 2: 106918 00:05:26.615 lcore 3: 106916 00:05:26.615 done. 
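The per-lcore counters printed by event_perf above can be totalled into a single events-per-run figure. This small helper is not part of the harness; `total_events` is a hypothetical name used only for illustration.

```shell
# Hypothetical helper (not in the harness) that sums the "lcore N: count"
# lines emitted by event_perf into one total for the 1-second run.
total_events() {
    awk '/^lcore [0-9]+:/ { sum += $3 } END { print sum + 0 }'
}
```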
00:05:26.615 00:05:26.615 real 0m1.340s 00:05:26.615 user 0m4.104s 00:05:26.615 sys 0m0.112s 00:05:26.615 23:45:20 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:26.615 23:45:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.615 ************************************ 00:05:26.615 END TEST event_perf 00:05:26.615 ************************************ 00:05:26.615 23:45:20 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:26.615 23:45:20 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:26.615 23:45:20 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:26.615 23:45:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.615 ************************************ 00:05:26.615 START TEST event_reactor 00:05:26.615 ************************************ 00:05:26.615 23:45:20 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:26.878 [2024-11-02 23:45:20.716197] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
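Each test above is launched through a `run_test` wrapper that prints the starred START/END banners and reports the body's exit status. A minimal sketch of that pattern, reconstructed from the trace (the real helper in common/autotest_common.sh also manages xtrace and timing, which this sketch omits):

```shell
# Minimal reconstruction of the run_test banner pattern seen in the trace;
# assumption: the wrapper names the test, runs the body, and returns its rc.
run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    local rc=0
    "$@" || rc=$?                  # run the test body, capturing its exit status
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}
```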
00:05:26.878 [2024-11-02 23:45:20.716748] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69928 ] 00:05:26.878 [2024-11-02 23:45:20.871483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.878 [2024-11-02 23:45:20.912349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.261 test_start 00:05:28.261 oneshot 00:05:28.261 tick 100 00:05:28.261 tick 100 00:05:28.261 tick 250 00:05:28.261 tick 100 00:05:28.261 tick 100 00:05:28.261 tick 100 00:05:28.261 tick 250 00:05:28.261 tick 500 00:05:28.261 tick 100 00:05:28.261 tick 100 00:05:28.261 tick 250 00:05:28.261 tick 100 00:05:28.261 tick 100 00:05:28.261 test_end 00:05:28.261 00:05:28.261 real 0m1.318s 00:05:28.261 user 0m1.122s 00:05:28.261 sys 0m0.088s 00:05:28.261 23:45:21 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:28.261 23:45:21 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:28.261 ************************************ 00:05:28.261 END TEST event_reactor 00:05:28.261 ************************************ 00:05:28.261 23:45:22 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.261 23:45:22 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:28.261 23:45:22 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:28.261 23:45:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.261 ************************************ 00:05:28.261 START TEST event_reactor_perf 00:05:28.261 ************************************ 00:05:28.261 23:45:22 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.261 [2024-11-02 
23:45:22.100748] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:05:28.261 [2024-11-02 23:45:22.100877] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69970 ] 00:05:28.261 [2024-11-02 23:45:22.258150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.261 [2024-11-02 23:45:22.298731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.646 test_start 00:05:29.646 test_end 00:05:29.646 Performance: 394941 events per second 00:05:29.646 00:05:29.646 real 0m1.322s 00:05:29.646 user 0m1.120s 00:05:29.646 sys 0m0.095s 00:05:29.646 ************************************ 00:05:29.646 END TEST event_reactor_perf 00:05:29.646 ************************************ 00:05:29.646 23:45:23 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:29.646 23:45:23 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.646 23:45:23 event -- event/event.sh@49 -- # uname -s 00:05:29.646 23:45:23 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:29.646 23:45:23 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:29.646 23:45:23 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:29.646 23:45:23 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:29.646 23:45:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.646 ************************************ 00:05:29.646 START TEST event_scheduler 00:05:29.646 ************************************ 00:05:29.646 23:45:23 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:29.646 * Looking for test storage... 
00:05:29.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:29.646 23:45:23 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:29.646 23:45:23 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:29.646 23:45:23 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:29.646 23:45:23 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:29.646 23:45:23 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.646 23:45:23 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.646 23:45:23 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.646 23:45:23 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.646 23:45:23 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.646 23:45:23 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.646 23:45:23 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.646 23:45:23 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.646 23:45:23 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.646 23:45:23 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.646 23:45:23 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.646 23:45:23 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:29.646 23:45:23 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:29.646 23:45:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.646 23:45:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.646 23:45:23 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:29.646 23:45:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:29.646 23:45:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.646 23:45:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:29.647 23:45:23 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.647 23:45:23 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:29.647 23:45:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:29.647 23:45:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.647 23:45:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:29.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.647 23:45:23 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.647 23:45:23 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.647 23:45:23 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.647 23:45:23 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:29.647 23:45:23 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.647 23:45:23 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:29.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.647 --rc genhtml_branch_coverage=1 00:05:29.647 --rc genhtml_function_coverage=1 00:05:29.647 --rc genhtml_legend=1 00:05:29.647 --rc geninfo_all_blocks=1 00:05:29.647 --rc geninfo_unexecuted_blocks=1 00:05:29.647 00:05:29.647 ' 00:05:29.647 23:45:23 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:29.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.647 
--rc genhtml_branch_coverage=1 00:05:29.647 --rc genhtml_function_coverage=1 00:05:29.647 --rc genhtml_legend=1 00:05:29.647 --rc geninfo_all_blocks=1 00:05:29.647 --rc geninfo_unexecuted_blocks=1 00:05:29.647 00:05:29.647 ' 00:05:29.647 23:45:23 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:29.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.647 --rc genhtml_branch_coverage=1 00:05:29.647 --rc genhtml_function_coverage=1 00:05:29.647 --rc genhtml_legend=1 00:05:29.647 --rc geninfo_all_blocks=1 00:05:29.647 --rc geninfo_unexecuted_blocks=1 00:05:29.647 00:05:29.647 ' 00:05:29.647 23:45:23 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:29.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.647 --rc genhtml_branch_coverage=1 00:05:29.647 --rc genhtml_function_coverage=1 00:05:29.647 --rc genhtml_legend=1 00:05:29.647 --rc geninfo_all_blocks=1 00:05:29.647 --rc geninfo_unexecuted_blocks=1 00:05:29.647 00:05:29.647 ' 00:05:29.647 23:45:23 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:29.647 23:45:23 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70035 00:05:29.647 23:45:23 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.647 23:45:23 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:29.647 23:45:23 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70035 00:05:29.647 23:45:23 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 70035 ']' 00:05:29.647 23:45:23 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.647 23:45:23 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:29.647 23:45:23 event.event_scheduler -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.647 23:45:23 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:29.647 23:45:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:29.906 [2024-11-02 23:45:23.746027] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:05:29.906 [2024-11-02 23:45:23.746275] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70035 ] 00:05:29.906 [2024-11-02 23:45:23.901884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:29.906 [2024-11-02 23:45:23.932384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.906 [2024-11-02 23:45:23.932579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.906 [2024-11-02 23:45:23.932602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.907 [2024-11-02 23:45:23.932703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.849 23:45:24 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:30.849 23:45:24 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:30.849 23:45:24 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:30.849 23:45:24 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.849 23:45:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.849 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.849 POWER: Cannot set governor of lcore 0 to userspace 00:05:30.849 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.849 POWER: Cannot set governor of lcore 0 to performance 00:05:30.849 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.849 POWER: Cannot set governor of lcore 0 to userspace 00:05:30.849 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:30.849 POWER: Unable to set Power Management Environment for lcore 0 00:05:30.849 [2024-11-02 23:45:24.581547] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:30.849 [2024-11-02 23:45:24.581568] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:30.849 [2024-11-02 23:45:24.581591] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:30.849 [2024-11-02 23:45:24.581629] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:30.849 [2024-11-02 23:45:24.581639] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:30.849 [2024-11-02 23:45:24.581647] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:30.849 23:45:24 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.849 23:45:24 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:30.849 23:45:24 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.849 23:45:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.849 [2024-11-02 23:45:24.652853] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
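The POWER errors above come from the dynamic scheduler trying to write per-CPU cpufreq governor files that are not available inside the VM. A hedged sketch of such a probe follows; `probe_governors` is a hypothetical name, and on hosts without writable cpufreq sysfs entries every CPU reports unavailable, matching the log.

```shell
# Hypothetical probe mirroring the sysfs files the dpdk governor tried to
# open above; reports the current governor where writable, otherwise notes
# that it is unavailable (as in this VM's POWER errors).
probe_governors() {
    local cpu gov
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        gov=$cpu/cpufreq/scaling_governor
        if [ -w "$gov" ]; then
            echo "$cpu: $(cat "$gov")"
        else
            echo "$cpu: governor not writable"
        fi
    done
    return 0
}
```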
00:05:30.849 23:45:24 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.849 23:45:24 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:30.849 23:45:24 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:30.849 23:45:24 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:30.849 23:45:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:30.849 ************************************
00:05:30.849 START TEST scheduler_create_thread
00:05:30.849 ************************************
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.849 2
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.849 3
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.849 4
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.849 5
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.849 6
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.849 7
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.849 8
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.849 9
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.849 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.850 10
00:05:30.850 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.850 23:45:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:30.850 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.850 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.850 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.850 23:45:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:30.850 23:45:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:30.850 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.850 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.850 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.850 23:45:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:30.850 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.850 23:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:32.227 23:45:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:32.227 23:45:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:32.227 23:45:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:32.227 23:45:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:32.227 23:45:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:33.605 ************************************
00:05:33.605 END TEST scheduler_create_thread
00:05:33.605 ************************************
00:05:33.605 23:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:33.605
00:05:33.605 real 0m2.611s
00:05:33.605 user 0m0.029s
00:05:33.605 sys 0m0.007s
00:05:33.605 23:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:33.605 23:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:33.605 23:45:27 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:33.605 23:45:27 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70035
00:05:33.605 23:45:27 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 70035 ']'
00:05:33.605 23:45:27 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 70035
00:05:33.605 23:45:27 event.event_scheduler -- common/autotest_common.sh@957 -- # uname
00:05:33.605 23:45:27 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:33.605 23:45:27 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70035
killing process with pid 70035
00:05:33.605 23:45:27 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:05:33.605 23:45:27 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:05:33.605 23:45:27 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70035'
00:05:33.605 23:45:27 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 70035
00:05:33.605 23:45:27 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 70035
00:05:33.864 [2024-11-02 23:45:27.755035] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:34.124 ************************************
00:05:34.124 END TEST event_scheduler
00:05:34.124 ************************************
00:05:34.124
00:05:34.124 real 0m4.517s
00:05:34.124 user 0m8.151s
00:05:34.124 sys 0m0.462s
00:05:34.124 23:45:27 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:34.124 23:45:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:34.124 23:45:28 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:34.124 23:45:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:34.124 23:45:28 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:34.124 23:45:28 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:34.124 23:45:28 event -- common/autotest_common.sh@10 -- # set +x
00:05:34.124 ************************************
00:05:34.124 START TEST app_repeat
00:05:34.124 ************************************
00:05:34.124 23:45:28 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test
00:05:34.124 23:45:28 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:34.124 23:45:28 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:34.124 23:45:28 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:34.124 23:45:28 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:34.124 23:45:28 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:34.124 23:45:28 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:34.124 23:45:28 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:34.124 23:45:28 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70136
00:05:34.124 23:45:28 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:34.124 23:45:28 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:34.124 23:45:28 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70136'
Process app_repeat pid: 70136
spdk_app_start Round 0
00:05:34.124 23:45:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:34.124 23:45:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:34.124 23:45:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70136 /var/tmp/spdk-nbd.sock
00:05:34.124 23:45:28 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 70136 ']'
00:05:34.124 23:45:28 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:34.124 23:45:28 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:34.124 23:45:28 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:34.124 23:45:28 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:34.124 23:45:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:34.124 [2024-11-02 23:45:28.097784] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization...
00:05:34.124 [2024-11-02 23:45:28.098355] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70136 ]
00:05:34.395 [2024-11-02 23:45:28.253359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:34.395 [2024-11-02 23:45:28.293828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:34.395 [2024-11-02 23:45:28.293916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:34.968 23:45:28 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:34.968 23:45:28 event.app_repeat -- common/autotest_common.sh@866 -- # return 0
00:05:34.968 23:45:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:35.227 Malloc0
00:05:35.227 23:45:29 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:35.487 Malloc1
00:05:35.487 23:45:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:35.487 23:45:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:35.487 23:45:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:35.487 23:45:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:35.487 23:45:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:35.487 23:45:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:35.487 23:45:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:35.487 23:45:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:35.487 23:45:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:35.487 23:45:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:35.487 23:45:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:35.487 23:45:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:35.487 23:45:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:35.487 23:45:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:35.487 23:45:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:35.487 23:45:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
/dev/nbd0
00:05:35.746 23:45:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:35.746 23:45:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:35.746 23:45:29 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:05:35.746 23:45:29 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:05:35.746 23:45:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:05:35.746 23:45:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:05:35.746 23:45:29 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:05:35.746 23:45:29 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:05:35.746 23:45:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:05:35.746 23:45:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:05:35.746 23:45:29 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:35.746 1+0 records in
00:05:35.746 1+0 records out
00:05:35.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055718 s, 7.4 MB/s
00:05:35.746 23:45:29 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:35.746 23:45:29 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:05:35.746 23:45:29 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:35.746 23:45:29 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:05:35.746 23:45:29 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:05:35.746 23:45:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:35.746 23:45:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:35.746 23:45:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
/dev/nbd1
00:05:35.746 23:45:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:36.006 23:45:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:36.006 23:45:29 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1
00:05:36.006 23:45:29 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:05:36.006 23:45:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:05:36.006 23:45:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:05:36.006 23:45:29 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions
00:05:36.006 23:45:29 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:05:36.006 23:45:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:05:36.006 23:45:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:05:36.006 23:45:29 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:36.006 1+0 records in
00:05:36.006 1+0 records out
00:05:36.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389706 s, 10.5 MB/s
00:05:36.006 23:45:29 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:36.006 23:45:29 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:05:36.006 23:45:29 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:36.006 23:45:29 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:05:36.006 23:45:29 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:05:36.006 23:45:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:36.006 23:45:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:36.006 23:45:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:36.006 23:45:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:36.006 23:45:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:36.006 23:45:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:36.006 {
00:05:36.006 "nbd_device": "/dev/nbd0",
00:05:36.006 "bdev_name": "Malloc0"
00:05:36.006 },
00:05:36.006 {
00:05:36.006 "nbd_device": "/dev/nbd1",
00:05:36.006 "bdev_name": "Malloc1"
00:05:36.006 }
00:05:36.006 ]'
00:05:36.006 23:45:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:36.006 {
00:05:36.006 "nbd_device": "/dev/nbd0",
00:05:36.006 "bdev_name": "Malloc0"
00:05:36.006 },
00:05:36.006 {
00:05:36.006 "nbd_device": "/dev/nbd1",
00:05:36.006 "bdev_name": "Malloc1"
00:05:36.006 }
00:05:36.006 ]'
00:05:36.006 23:45:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:36.266 23:45:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:36.266 /dev/nbd1'
00:05:36.266 23:45:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:36.266 /dev/nbd1'
00:05:36.266 23:45:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:36.266 23:45:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:36.266 23:45:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:36.266 23:45:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:36.266 23:45:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:36.266 23:45:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:36.266 23:45:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:36.266 23:45:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:36.266 23:45:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:36.267 256+0 records in
00:05:36.267 256+0 records out
00:05:36.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00473761 s, 221 MB/s
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:36.267 256+0 records in
00:05:36.267 256+0 records out
00:05:36.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248267 s, 42.2 MB/s
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:36.267 256+0 records in
00:05:36.267 256+0 records out
00:05:36.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272251 s, 38.5 MB/s
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:36.267 23:45:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:36.526 23:45:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:36.526 23:45:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:36.526 23:45:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:36.526 23:45:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:36.526 23:45:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:36.526 23:45:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:36.526 23:45:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:36.526 23:45:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:36.526 23:45:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:36.526 23:45:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:36.794 23:45:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:36.794 23:45:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:36.794 23:45:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:36.794 23:45:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:36.795 23:45:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:36.795 23:45:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:36.795 23:45:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:36.795 23:45:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:36.795 23:45:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:36.795 23:45:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:36.795 23:45:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:37.055 23:45:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:37.055 23:45:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:37.055 23:45:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:37.055 23:45:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:37.055 23:45:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:37.055 23:45:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:37.055 23:45:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:37.055 23:45:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:37.055 23:45:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:37.055 23:45:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:37.055 23:45:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:37.055 23:45:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:37.055 23:45:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:37.315 23:45:31 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:37.315 [2024-11-02 23:45:31.307398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:37.315 [2024-11-02 23:45:31.333716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.315 [2024-11-02 23:45:31.333717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:37.315 [2024-11-02 23:45:31.376794] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:37.315 [2024-11-02 23:45:31.376853] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:40.681 spdk_app_start Round 1
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
23:45:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:40.681 23:45:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:40.681 23:45:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70136 /var/tmp/spdk-nbd.sock
00:05:40.681 23:45:34 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 70136 ']'
00:05:40.681 23:45:34 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:40.681 23:45:34 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:40.681 23:45:34 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:40.681 23:45:34 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:40.681 23:45:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:40.681 23:45:34 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:40.681 23:45:34 event.app_repeat -- common/autotest_common.sh@866 -- # return 0
00:05:40.681 23:45:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:40.681 Malloc0
00:05:40.681 23:45:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:40.941 Malloc1
00:05:40.941 23:45:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:40.941 23:45:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:40.941 23:45:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:40.941 23:45:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:40.941 23:45:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:40.941 23:45:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:40.941 23:45:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:40.941 23:45:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:40.941 23:45:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:40.941 23:45:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:40.941 23:45:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:40.941 23:45:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:40.941 23:45:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:40.941 23:45:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:40.941 23:45:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:40.941 23:45:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
/dev/nbd0
00:05:40.941 23:45:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:41.201 23:45:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:41.202 1+0 records in
00:05:41.202 1+0 records out
00:05:41.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400925 s, 10.2 MB/s
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:05:41.202 23:45:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:41.202 23:45:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:41.202 23:45:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
/dev/nbd1
00:05:41.202 23:45:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:41.202 23:45:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:41.202 1+0 records in
00:05:41.202 1+0 records out
00:05:41.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037003 s, 11.1 MB/s
00:05:41.202 23:45:35 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:41.461 23:45:35 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:05:41.461 23:45:35 event.app_repeat
-- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.461 23:45:35 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:41.461 23:45:35 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:41.461 23:45:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.461 23:45:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.461 23:45:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.461 23:45:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.461 23:45:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.461 23:45:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.461 { 00:05:41.461 "nbd_device": "/dev/nbd0", 00:05:41.461 "bdev_name": "Malloc0" 00:05:41.461 }, 00:05:41.461 { 00:05:41.461 "nbd_device": "/dev/nbd1", 00:05:41.461 "bdev_name": "Malloc1" 00:05:41.461 } 00:05:41.461 ]' 00:05:41.461 23:45:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.461 { 00:05:41.461 "nbd_device": "/dev/nbd0", 00:05:41.461 "bdev_name": "Malloc0" 00:05:41.461 }, 00:05:41.461 { 00:05:41.461 "nbd_device": "/dev/nbd1", 00:05:41.461 "bdev_name": "Malloc1" 00:05:41.461 } 00:05:41.461 ]' 00:05:41.461 23:45:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.461 23:45:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.461 /dev/nbd1' 00:05:41.721 23:45:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.721 /dev/nbd1' 00:05:41.721 23:45:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.721 23:45:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.721 23:45:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.721 
23:45:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.721 23:45:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.721 23:45:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.721 23:45:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.721 23:45:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.721 23:45:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.721 23:45:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.721 23:45:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.721 23:45:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.721 256+0 records in 00:05:41.721 256+0 records out 00:05:41.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012547 s, 83.6 MB/s 00:05:41.721 23:45:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.721 23:45:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.721 256+0 records in 00:05:41.721 256+0 records out 00:05:41.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253602 s, 41.3 MB/s 00:05:41.721 23:45:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.721 23:45:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.721 256+0 records in 00:05:41.721 256+0 records out 00:05:41.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269364 s, 38.9 MB/s 00:05:41.721 23:45:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:41.721 23:45:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.722 23:45:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.722 23:45:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.722 23:45:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.722 23:45:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.722 23:45:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.722 23:45:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.722 23:45:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.722 23:45:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.722 23:45:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.722 23:45:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.722 23:45:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.722 23:45:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.722 23:45:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.722 23:45:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.722 23:45:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.722 23:45:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.722 23:45:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.982 23:45:35 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.982 23:45:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.982 23:45:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.982 23:45:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.982 23:45:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.982 23:45:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.982 23:45:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.982 23:45:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.982 23:45:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.982 23:45:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.242 23:45:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.242 23:45:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.242 23:45:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.242 23:45:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.242 23:45:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.242 23:45:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.242 23:45:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.242 23:45:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.242 23:45:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.242 23:45:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.242 23:45:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.242 23:45:36 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.242 23:45:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.242 23:45:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.501 23:45:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.501 23:45:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.501 23:45:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.501 23:45:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.501 23:45:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.501 23:45:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.501 23:45:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.501 23:45:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.501 23:45:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.501 23:45:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.501 23:45:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.761 [2024-11-02 23:45:36.701181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.761 [2024-11-02 23:45:36.725654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.761 [2024-11-02 23:45:36.725690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.761 [2024-11-02 23:45:36.767572] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.761 [2024-11-02 23:45:36.767634] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:46.054 spdk_app_start Round 2 00:05:46.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
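The Round-1 trace above drives SPDK's `nbd_dd_data_verify` helper: it fills a temp file from /dev/urandom, dd's it onto each exported /dev/nbdX with direct I/O, then cmp's the first 1 MiB of each device back against the file. A minimal standalone sketch of that write-then-compare cycle follows; it uses ordinary temp files in place of /dev/nbd0 and /dev/nbd1 (so `oflag=direct` is dropped), and the file names are illustrative, not SPDK's.

```shell
# Sketch of nbd_common.sh's write-then-verify cycle, with plain files
# standing in for the nbd block devices.
set -euo pipefail

tmp_file=$(mktemp)                  # stands in for .../test/event/nbdrandtest
targets=("$(mktemp)" "$(mktemp)")   # stand in for /dev/nbd0 and /dev/nbd1

# write phase: 256 blocks of 4 KiB random data, copied to every target
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
for t in "${targets[@]}"; do
    dd if="$tmp_file" of="$t" bs=4096 count=256 status=none
done

# verify phase: byte-compare the first 1 MiB of every target; cmp exits
# nonzero on the first mismatch, which aborts the script via set -e
for t in "${targets[@]}"; do
    cmp -b -n 1M "$tmp_file" "$t"
done
echo "verify OK"
```

In the real test the write side goes through the kernel nbd driver to the SPDK malloc bdevs, which is why the trace shows much lower throughput for the device copies than for the initial urandom fill.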
00:05:46.054 23:45:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:46.054 23:45:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:46.054 23:45:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70136 /var/tmp/spdk-nbd.sock 00:05:46.054 23:45:39 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 70136 ']' 00:05:46.054 23:45:39 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.054 23:45:39 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:46.054 23:45:39 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:46.054 23:45:39 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:46.054 23:45:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.054 23:45:39 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:46.054 23:45:39 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:46.054 23:45:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.054 Malloc0 00:05:46.054 23:45:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.315 Malloc1 00:05:46.315 23:45:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.315 23:45:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.315 23:45:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.315 23:45:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.315 23:45:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.315 23:45:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.315 23:45:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.315 23:45:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.315 23:45:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.315 23:45:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.315 23:45:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.315 23:45:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.315 23:45:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.315 23:45:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.315 23:45:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.315 23:45:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.574 /dev/nbd0 00:05:46.574 23:45:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.574 23:45:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.574 23:45:40 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:46.574 23:45:40 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:46.574 23:45:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:46.574 23:45:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:46.574 23:45:40 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:46.574 23:45:40 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:46.574 23:45:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:05:46.574 23:45:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:46.574 23:45:40 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.574 1+0 records in 00:05:46.574 1+0 records out 00:05:46.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422856 s, 9.7 MB/s 00:05:46.574 23:45:40 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.574 23:45:40 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:46.574 23:45:40 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.574 23:45:40 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:46.574 23:45:40 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:46.574 23:45:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.574 23:45:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.574 23:45:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.834 /dev/nbd1 00:05:46.834 23:45:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.834 23:45:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.834 23:45:40 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:46.834 23:45:40 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:46.834 23:45:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:46.834 23:45:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:46.834 23:45:40 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:46.834 23:45:40 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:05:46.834 23:45:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:46.834 23:45:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:46.834 23:45:40 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.834 1+0 records in 00:05:46.834 1+0 records out 00:05:46.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389046 s, 10.5 MB/s 00:05:46.834 23:45:40 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.835 23:45:40 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:46.835 23:45:40 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.835 23:45:40 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:46.835 23:45:40 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:46.835 23:45:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.835 23:45:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.835 23:45:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.835 23:45:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.835 23:45:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.094 23:45:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.094 { 00:05:47.094 "nbd_device": "/dev/nbd0", 00:05:47.094 "bdev_name": "Malloc0" 00:05:47.094 }, 00:05:47.094 { 00:05:47.094 "nbd_device": "/dev/nbd1", 00:05:47.094 "bdev_name": "Malloc1" 00:05:47.094 } 00:05:47.094 ]' 00:05:47.094 23:45:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:47.094 23:45:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.094 { 00:05:47.094 "nbd_device": "/dev/nbd0", 00:05:47.094 "bdev_name": "Malloc0" 00:05:47.094 }, 00:05:47.094 { 00:05:47.094 "nbd_device": "/dev/nbd1", 00:05:47.094 "bdev_name": "Malloc1" 00:05:47.094 } 00:05:47.094 ]' 00:05:47.094 23:45:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.094 /dev/nbd1' 00:05:47.094 23:45:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.094 /dev/nbd1' 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.095 256+0 records in 00:05:47.095 256+0 records out 00:05:47.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012428 s, 84.4 MB/s 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.095 23:45:41 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.095 256+0 records in 00:05:47.095 256+0 records out 00:05:47.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208178 s, 50.4 MB/s 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.095 256+0 records in 00:05:47.095 256+0 records out 00:05:47.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150412 s, 69.7 MB/s 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.095 23:45:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.355 23:45:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.355 23:45:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.355 23:45:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.355 23:45:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.355 23:45:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.355 23:45:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.355 23:45:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.355 23:45:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.355 23:45:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.355 23:45:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.614 23:45:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.614 23:45:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.614 23:45:41 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:05:47.614 23:45:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.614 23:45:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.614 23:45:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.614 23:45:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.614 23:45:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.614 23:45:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.614 23:45:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.614 23:45:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.873 23:45:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.873 23:45:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.873 23:45:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.873 23:45:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.873 23:45:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.873 23:45:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.873 23:45:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.873 23:45:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.873 23:45:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.873 23:45:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.873 23:45:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.873 23:45:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.873 23:45:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.133 23:45:42 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:05:48.133 [2024-11-02 23:45:42.160116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.133 [2024-11-02 23:45:42.184148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.133 [2024-11-02 23:45:42.184150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.392 [2024-11-02 23:45:42.227280] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.392 [2024-11-02 23:45:42.227357] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.704 23:45:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70136 /var/tmp/spdk-nbd.sock 00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 70136 ']' 00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:51.704 23:45:45 event.app_repeat -- event/event.sh@39 -- # killprocess 70136 00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 70136 ']' 00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 70136 00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70136 00:05:51.704 killing process with pid 70136 00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70136' 00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@971 -- # kill 70136 00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@976 -- # wait 70136 00:05:51.704 spdk_app_start is called in Round 0. 00:05:51.704 Shutdown signal received, stop current app iteration 00:05:51.704 Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 reinitialization... 00:05:51.704 spdk_app_start is called in Round 1. 00:05:51.704 Shutdown signal received, stop current app iteration 00:05:51.704 Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 reinitialization... 00:05:51.704 spdk_app_start is called in Round 2. 
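The `killprocess 70136` trace above first probes the pid with `kill -0`, reads its command name via `ps --no-headers -o comm=`, refuses to proceed if the name is `sudo`, then sends SIGTERM and waits for the process to exit. A hedged standalone sketch of that helper (the function body is a simplification; SPDK's real version lives in autotest_common.sh and handles more platforms):

```shell
# Sketch of an SPDK-style killprocess helper: verify the pid is alive,
# guard against obviously wrong targets, then SIGTERM and reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                    # is the process alive?
    local name
    name=$(ps --no-headers -o comm= -p "$pid")    # command name, e.g. reactor_0
    [ "$name" != sudo ] || return 1               # same guard the trace applies
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap; ignore the 143 exit code
}
```

Usage mirrors the trace: `killprocess "$app_pid"` after the nbd cleanup, so the SIGTERM lands only once all exported devices have been stopped.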
00:05:51.704 Shutdown signal received, stop current app iteration
00:05:51.704 Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 reinitialization...
00:05:51.704 spdk_app_start is called in Round 3.
00:05:51.704 Shutdown signal received, stop current app iteration
00:05:51.704 23:45:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:51.704 23:45:45 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:51.704
00:05:51.704 real	0m17.514s
00:05:51.704 user	0m38.783s
00:05:51.704 sys	0m2.532s
00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:51.704 23:45:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:51.704 ************************************
00:05:51.704 END TEST app_repeat
00:05:51.704 ************************************
00:05:51.704 23:45:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:51.704 23:45:45 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:05:51.705 23:45:45 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:51.705 23:45:45 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:51.705 23:45:45 event -- common/autotest_common.sh@10 -- # set +x
00:05:51.705 ************************************
00:05:51.705 START TEST cpu_locks
00:05:51.705 ************************************
00:05:51.705 23:45:45 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:05:51.705 * Looking for test storage...
00:05:51.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:51.705 23:45:45 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:51.705 23:45:45 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version
00:05:51.705 23:45:45 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:51.965 23:45:45 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:51.965 23:45:45 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:05:51.965 23:45:45 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:51.965 23:45:45 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:51.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.965 --rc genhtml_branch_coverage=1
00:05:51.965 --rc genhtml_function_coverage=1
00:05:51.965 --rc genhtml_legend=1
00:05:51.965 --rc geninfo_all_blocks=1
00:05:51.965 --rc geninfo_unexecuted_blocks=1
00:05:51.965
00:05:51.965 '
00:05:51.965 23:45:45 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:51.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.965 --rc genhtml_branch_coverage=1
00:05:51.965 --rc genhtml_function_coverage=1
00:05:51.965 --rc genhtml_legend=1
00:05:51.965 --rc geninfo_all_blocks=1
00:05:51.965 --rc geninfo_unexecuted_blocks=1
00:05:51.965
00:05:51.965 '
00:05:51.965 23:45:45 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:51.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.965 --rc genhtml_branch_coverage=1
00:05:51.965 --rc genhtml_function_coverage=1
00:05:51.965 --rc genhtml_legend=1
00:05:51.965 --rc geninfo_all_blocks=1
00:05:51.965 --rc geninfo_unexecuted_blocks=1
00:05:51.965
00:05:51.965 '
00:05:51.965 23:45:45 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:51.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.965 --rc genhtml_branch_coverage=1
00:05:51.965 --rc genhtml_function_coverage=1
00:05:51.965 --rc genhtml_legend=1
00:05:51.965 --rc geninfo_all_blocks=1
00:05:51.965 --rc geninfo_unexecuted_blocks=1
00:05:51.965
00:05:51.965 '
00:05:51.965 23:45:45 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:51.965 23:45:45 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:51.965 23:45:45 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:51.965 23:45:45 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:51.965 23:45:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:51.965 23:45:45 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:51.965 23:45:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:51.965 ************************************
00:05:51.965 START TEST default_locks
00:05:51.965 ************************************
00:05:51.965 23:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks
00:05:51.965 23:45:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70561
00:05:51.965 23:45:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:51.965 23:45:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70561
00:05:51.965 23:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 70561 ']'
00:05:51.965 23:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:51.965 23:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:51.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:51.966 23:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:51.966 23:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:51.966 23:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:52.224 [2024-11-02 23:45:45.954197] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization...
00:05:51.966 [2024-11-02 23:45:45.954332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70561 ]
00:05:52.225 [2024-11-02 23:45:46.110800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:52.225 [2024-11-02 23:45:46.155487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:52.795 23:45:46 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:52.795 23:45:46 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0
00:05:52.795 23:45:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70561
00:05:52.795 23:45:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:52.795 23:45:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70561
00:05:53.365 23:45:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70561
00:05:53.365 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 70561 ']'
00:05:53.365 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 70561
00:05:53.365 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname
00:05:53.365 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:53.365 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70561
00:05:53.365 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:53.365 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
killing process with pid 70561
23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70561'
00:05:53.365 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 70561
00:05:53.365 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 70561
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70561
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70561
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70561
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 70561 ']'
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:53.935 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (70561) - No such process
00:05:53.935 ERROR: process (pid: 70561) is no longer running
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:53.935
00:05:53.935 real	0m2.061s
00:05:53.935 user	0m1.908s
00:05:53.935 sys	0m0.767s
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:53.935 23:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:53.935 ************************************
00:05:53.935 END TEST default_locks
00:05:53.935 ************************************
00:05:53.935 23:45:47 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:53.935 23:45:47 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:53.935 23:45:47 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:53.935 23:45:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:53.935 ************************************
00:05:53.935 START TEST default_locks_via_rpc
00:05:53.935 ************************************
00:05:53.935 23:45:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc
00:05:53.935 23:45:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70614
00:05:53.935 23:45:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:53.935 23:45:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70614
00:05:53.935 23:45:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 70614 ']'
00:05:53.935 23:45:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:53.935 23:45:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:53.935 23:45:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:53.935 23:45:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:53.935 23:45:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:54.197 [2024-11-02 23:45:48.083602] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization...
00:05:54.197 [2024-11-02 23:45:48.083731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70614 ]
00:05:54.197 [2024-11-02 23:45:48.239561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:54.197 [2024-11-02 23:45:48.280910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:55.136 23:45:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:55.136 23:45:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0
00:05:55.136 23:45:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:55.136 23:45:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:55.136 23:45:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:55.136 23:45:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:55.136 23:45:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:55.136 23:45:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:55.136 23:45:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:55.136 23:45:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:55.136 23:45:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:55.136 23:45:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:55.136 23:45:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:55.136 23:45:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:55.136 23:45:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70614
00:05:55.136 23:45:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70614
00:05:55.136 23:45:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:55.136 23:45:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70614
00:05:55.136 23:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 70614 ']'
00:05:55.136 23:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 70614
00:05:55.136 23:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname
00:05:55.136 23:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:55.136 23:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70614
killing process with pid 70614
00:05:55.136 23:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:55.136 23:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:55.136 23:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70614'
00:05:55.136 23:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 70614
00:05:55.136 23:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 70614
00:05:56.074
00:05:56.074 real	0m1.846s
00:05:56.074 user	0m1.668s
00:05:56.074 sys	0m0.670s
00:05:56.074 23:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:56.074 23:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:56.074 ************************************
00:05:56.074 END TEST default_locks_via_rpc
00:05:56.074 ************************************
00:05:56.074 23:45:49 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:56.074 23:45:49 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:56.074 23:45:49 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:56.074 23:45:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:56.074 ************************************
00:05:56.074 START TEST non_locking_app_on_locked_coremask
00:05:56.074 ************************************
00:05:56.074 23:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask
00:05:56.074 23:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70668
00:05:56.074 23:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:56.074 23:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70668 /var/tmp/spdk.sock
00:05:56.074 23:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 70668 ']'
00:05:56.074 23:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:56.074 23:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:56.074 23:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:56.074 23:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:56.074 23:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:56.074 [2024-11-02 23:45:49.995591] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization...
00:05:56.074 [2024-11-02 23:45:49.995721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70668 ]
00:05:56.074 [2024-11-02 23:45:50.153027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:56.332 [2024-11-02 23:45:50.196517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:56.914 23:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:56.914 23:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:56.914 23:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70684
00:05:56.914 23:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:56.914 23:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70684 /var/tmp/spdk2.sock
00:05:56.914 23:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 70684 ']'
00:05:56.914 23:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:56.914 23:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:56.914 23:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:56.914 23:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:56.914 23:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:57.174 [2024-11-02 23:45:50.882008] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization...
00:05:57.174 [2024-11-02 23:45:50.882172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70684 ]
00:05:57.174 [2024-11-02 23:45:51.034926] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:57.174 [2024-11-02 23:45:51.035026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:57.174 [2024-11-02 23:45:51.125430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:57.742 23:45:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:57.742 23:45:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:57.742 23:45:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70668
00:05:57.742 23:45:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70668
00:05:57.742 23:45:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:58.317 23:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70668
00:05:58.317 23:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 70668 ']'
00:05:58.317 23:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 70668
00:05:58.317 23:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:58.317 23:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:58.317 23:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70668
00:05:58.317 23:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:58.317 23:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
killing process with pid 70668
23:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70668'
00:05:58.317 23:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 70668
00:05:58.317 23:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 70668
00:05:59.260 23:45:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70684
00:05:59.260 23:45:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 70684 ']'
00:05:59.260 23:45:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 70684
00:05:59.519 23:45:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:59.519 23:45:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:59.519 23:45:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70684
00:05:59.519 23:45:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:59.519 23:45:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
killing process with pid 70684
00:05:59.519 23:45:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70684'
00:05:59.519 23:45:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 70684
00:05:59.519 23:45:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 70684
00:06:00.089
00:06:00.089 real	0m4.097s
00:06:00.089 user	0m3.973s
00:06:00.089 sys	0m1.248s
00:06:00.089 23:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:00.089 23:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:00.089 ************************************
00:06:00.089 END TEST non_locking_app_on_locked_coremask
00:06:00.089 ************************************
00:06:00.089 23:45:54 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:00.089 23:45:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:00.089 23:45:54 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:00.089 23:45:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:00.089 ************************************
00:06:00.089 START TEST locking_app_on_unlocked_coremask
00:06:00.089 ************************************
00:06:00.089 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask
00:06:00.089 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70754
00:06:00.089 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:00.089 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70754 /var/tmp/spdk.sock
00:06:00.089 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 70754 ']'
00:06:00.089 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:00.089 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:00.089 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:00.089 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:00.089 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:00.089 [2024-11-02 23:45:54.159586] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization...
00:06:00.089 [2024-11-02 23:45:54.159716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70754 ]
00:06:00.349 [2024-11-02 23:45:54.313415] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:00.349 [2024-11-02 23:45:54.313496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:00.349 [2024-11-02 23:45:54.356817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:00.919 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:00.919 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
00:06:00.919 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70770
00:06:00.919 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:00.919 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70770 /var/tmp/spdk2.sock
00:06:00.919 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 70770 ']'
00:06:00.919 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.919 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:00.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.919 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.919 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:00.919 23:45:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.179 [2024-11-02 23:45:55.048761] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:06:01.179 [2024-11-02 23:45:55.048886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70770 ] 00:06:01.179 [2024-11-02 23:45:55.201800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.453 [2024-11-02 23:45:55.293362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.023 23:45:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:02.023 23:45:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:02.023 23:45:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70770 00:06:02.023 23:45:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70770 00:06:02.023 23:45:55 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.283 23:45:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70754 00:06:02.283 23:45:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 70754 ']' 00:06:02.283 23:45:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 70754 00:06:02.283 23:45:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:02.283 23:45:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:02.283 23:45:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70754 00:06:02.283 23:45:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:02.283 23:45:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:02.283 killing process with pid 70754 00:06:02.283 23:45:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70754' 00:06:02.283 23:45:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 70754 00:06:02.283 23:45:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 70754 00:06:03.662 23:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70770 00:06:03.662 23:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 70770 ']' 00:06:03.662 23:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 70770 00:06:03.662 23:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@957 -- # uname 00:06:03.662 23:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:03.662 23:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70770 00:06:03.662 23:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:03.662 23:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:03.662 killing process with pid 70770 00:06:03.662 23:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70770' 00:06:03.662 23:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 70770 00:06:03.662 23:45:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 70770 00:06:04.232 00:06:04.232 real 0m4.123s 00:06:04.232 user 0m3.986s 00:06:04.232 sys 0m1.269s 00:06:04.232 23:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:04.232 23:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.232 ************************************ 00:06:04.232 END TEST locking_app_on_unlocked_coremask 00:06:04.232 ************************************ 00:06:04.232 23:45:58 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:04.232 23:45:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:04.232 23:45:58 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:04.232 23:45:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.232 ************************************ 00:06:04.232 START TEST 
locking_app_on_locked_coremask 00:06:04.232 ************************************ 00:06:04.232 23:45:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:04.232 23:45:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70839 00:06:04.232 23:45:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.232 23:45:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 70839 /var/tmp/spdk.sock 00:06:04.232 23:45:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 70839 ']' 00:06:04.232 23:45:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.232 23:45:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:04.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.232 23:45:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.232 23:45:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:04.232 23:45:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.491 [2024-11-02 23:45:58.360393] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:06:04.491 [2024-11-02 23:45:58.360549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70839 ]
00:06:04.491 [2024-11-02 23:45:58.505806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:04.491 [2024-11-02 23:45:58.546661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:05.060 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:05.060 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:06:05.320 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70855
00:06:05.321 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:05.321 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70855 /var/tmp/spdk2.sock
00:06:05.321 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:05.321 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70855 /var/tmp/spdk2.sock
00:06:05.321 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:05.321 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:05.321 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:05.321 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:05.321 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 70855 /var/tmp/spdk2.sock
00:06:05.321 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 70855 ']'
00:06:05.321 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:05.321 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:05.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:05.321 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:05.321 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:05.321 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:05.321 [2024-11-02 23:45:59.245646] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization...
00:06:05.321 [2024-11-02 23:45:59.245811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70855 ]
00:06:05.321 [2024-11-02 23:45:59.397633] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70839 has claimed it.
00:06:05.321 [2024-11-02 23:45:59.397712] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:05.897 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (70855) - No such process
00:06:05.897 ERROR: process (pid: 70855) is no longer running
00:06:05.897 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:05.897 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1
00:06:05.897 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:06:05.897 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:05.897 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:05.897 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:05.897 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 70839
00:06:05.897 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70839
00:06:05.897 23:45:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:06.156 23:46:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 70839
00:06:06.156 23:46:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 70839 ']'
00:06:06.156 23:46:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 70839
00:06:06.156 23:46:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:06:06.156 23:46:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:06.156 23:46:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70839
00:06:06.156 23:46:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:06.156 23:46:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
killing process with pid 70839
00:06:06.156 23:46:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70839'
00:06:06.156 23:46:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 70839
00:06:06.156 23:46:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 70839
00:06:06.753
00:06:06.753 real 0m2.263s
00:06:06.753 user 0m2.269s
00:06:06.753 sys 0m0.785s
00:06:06.753 23:46:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:06.753 23:46:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:06.753 ************************************
00:06:06.753 END TEST locking_app_on_locked_coremask
00:06:06.753 ************************************
00:06:06.753 23:46:00 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:06.753 23:46:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:06.753 23:46:00 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:06.753 23:46:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:06.753 ************************************
00:06:06.753 START TEST locking_overlapped_coremask
00:06:06.753 ************************************
00:06:06.753 23:46:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask
00:06:06.753 23:46:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70897
00:06:06.753 23:46:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 70897 /var/tmp/spdk.sock
00:06:06.753 23:46:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:06:06.753 23:46:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 70897 ']'
00:06:06.753 23:46:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:06.753 23:46:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:06.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:06.753 23:46:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:06.753 23:46:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:06.753 23:46:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:06.753 [2024-11-02 23:46:00.679192] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization...
00:06:06.753 [2024-11-02 23:46:00.679333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70897 ]
00:06:06.753 [2024-11-02 23:46:00.815996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:06.753 [2024-11-02 23:46:00.844193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:06.753 [2024-11-02 23:46:00.844303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:06.753 [2024-11-02 23:46:00.844408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:07.704 23:46:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:07.704 23:46:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0
00:06:07.704 23:46:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70915
00:06:07.704 23:46:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:07.704 23:46:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70915 /var/tmp/spdk2.sock
00:06:07.704 23:46:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:07.704 23:46:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70915 /var/tmp/spdk2.sock
00:06:07.704 23:46:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:07.704 23:46:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:07.704 23:46:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:07.704 23:46:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:07.704 23:46:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 70915 /var/tmp/spdk2.sock
00:06:07.704 23:46:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 70915 ']'
00:06:07.704 23:46:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:07.704 23:46:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:07.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:07.704 23:46:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:07.704 23:46:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:07.704 23:46:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:07.704 [2024-11-02 23:46:01.582505] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization...
00:06:07.704 [2024-11-02 23:46:01.582669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70915 ]
00:06:07.704 [2024-11-02 23:46:01.732204] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70897 has claimed it.
00:06:07.704 [2024-11-02 23:46:01.732274] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:08.272 ERROR: process (pid: 70915) is no longer running
00:06:08.272 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (70915) - No such process
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 70897
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 70897 ']'
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 70897
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70897
killing process with pid 70897
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70897'
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 70897
00:06:08.272 23:46:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 70897
00:06:08.532 ************************************
00:06:08.532 END TEST locking_overlapped_coremask
00:06:08.532 ************************************
00:06:08.532
00:06:08.532 real 0m2.023s
00:06:08.532 user 0m5.496s
00:06:08.532 sys 0m0.494s
00:06:08.532 23:46:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:08.532 23:46:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:08.792 23:46:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:08.792 23:46:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:08.792 23:46:02 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:08.792 23:46:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:08.792 ************************************
00:06:08.792 START TEST locking_overlapped_coremask_via_rpc
00:06:08.792 ************************************
00:06:08.792 23:46:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc
00:06:08.792 23:46:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70957
00:06:08.792 23:46:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 70957 /var/tmp/spdk.sock
00:06:08.792 23:46:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:08.792 23:46:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 70957 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:08.792 23:46:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:08.792 23:46:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:08.792 23:46:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:08.792 23:46:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:08.792 23:46:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:08.792 [2024-11-02 23:46:02.789321] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization...
00:06:08.792 [2024-11-02 23:46:02.789483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70957 ]
00:06:09.052 [2024-11-02 23:46:02.947523] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:09.052 [2024-11-02 23:46:02.947647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:09.052 [2024-11-02 23:46:02.975939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:09.052 [2024-11-02 23:46:02.976040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.052 [2024-11-02 23:46:02.976212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:09.621 23:46:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:09.621 23:46:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0
00:06:09.621 23:46:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:06:09.621 23:46:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70975
00:06:09.621 23:46:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 70975 /var/tmp/spdk2.sock
00:06:09.621 23:46:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 70975 ']'
00:06:09.621 23:46:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:09.621 23:46:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:09.621 23:46:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:09.621 23:46:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:09.621 23:46:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:09.622 [2024-11-02 23:46:03.652689] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization...
00:06:09.622 [2024-11-02 23:46:03.652914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70975 ]
00:06:09.881 [2024-11-02 23:46:03.801994] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:09.881 [2024-11-02 23:46:03.802045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:09.881 [2024-11-02 23:46:03.863404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:09.881 [2024-11-02 23:46:03.863417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:09.881 [2024-11-02 23:46:03.863480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:06:10.450 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:10.450 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0
00:06:10.450 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:06:10.450 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.450 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:10.450 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:10.450 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:10.450 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0
00:06:10.450 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:10.450 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:06:10.450 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:10.450 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:06:10.450 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:10.450 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:10.451 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:10.451 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:10.451 [2024-11-02 23:46:04.522929] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70957 has claimed it.
00:06:10.451 request:
00:06:10.451 {
00:06:10.451 "method": "framework_enable_cpumask_locks",
00:06:10.451 "req_id": 1
00:06:10.451 }
00:06:10.451 Got JSON-RPC error response
00:06:10.451 response:
00:06:10.451 {
00:06:10.451 "code": -32603,
00:06:10.451 "message": "Failed to claim CPU core: 2"
00:06:10.451 }
00:06:10.451 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:06:10.451 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1
00:06:10.451 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:10.451 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:10.451 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:10.451 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 70957 /var/tmp/spdk.sock
00:06:10.451 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 70957 ']'
00:06:10.451 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:10.451 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:10.451 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:10.451 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:10.451 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:10.711 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:10.711 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0
00:06:10.711 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 70975 /var/tmp/spdk2.sock
00:06:10.711 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 70975 ']'
00:06:10.711 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:10.711 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:10.711 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:10.711 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:10.711 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.971 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:10.971 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:10.971 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:10.971 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.971 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.971 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.971 00:06:10.971 real 0m2.264s 00:06:10.971 user 0m1.051s 00:06:10.971 sys 0m0.146s 00:06:10.971 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:10.971 23:46:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.971 ************************************ 00:06:10.971 END TEST locking_overlapped_coremask_via_rpc 00:06:10.971 ************************************ 00:06:10.971 23:46:04 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:10.971 23:46:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70957 ]] 00:06:10.971 23:46:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 70957 00:06:10.971 23:46:04 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 70957 ']' 00:06:10.971 23:46:04 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 70957 00:06:10.971 23:46:04 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:10.971 23:46:05 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:10.971 23:46:05 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70957 00:06:10.971 killing process with pid 70957 00:06:10.971 23:46:05 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:10.971 23:46:05 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:10.971 23:46:05 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70957' 00:06:10.971 23:46:05 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 70957 00:06:10.971 23:46:05 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 70957 00:06:11.540 23:46:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70975 ]] 00:06:11.540 23:46:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70975 00:06:11.540 23:46:05 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 70975 ']' 00:06:11.540 23:46:05 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 70975 00:06:11.540 23:46:05 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:11.540 23:46:05 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:11.540 23:46:05 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70975 00:06:11.540 killing process with pid 70975 00:06:11.540 23:46:05 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:11.540 23:46:05 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:11.540 23:46:05 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 70975' 00:06:11.540 23:46:05 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 70975 00:06:11.540 23:46:05 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 70975 00:06:11.800 23:46:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:11.800 23:46:05 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:11.800 23:46:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70957 ]] 00:06:11.800 23:46:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 70957 00:06:11.800 23:46:05 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 70957 ']' 00:06:11.800 23:46:05 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 70957 00:06:11.800 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (70957) - No such process 00:06:11.800 Process with pid 70957 is not found 00:06:11.800 23:46:05 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 70957 is not found' 00:06:11.800 23:46:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70975 ]] 00:06:11.800 23:46:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70975 00:06:11.800 23:46:05 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 70975 ']' 00:06:11.800 Process with pid 70975 is not found 00:06:11.800 23:46:05 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 70975 00:06:11.800 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (70975) - No such process 00:06:11.800 23:46:05 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 70975 is not found' 00:06:11.800 23:46:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:11.800 00:06:11.800 real 0m20.200s 00:06:11.800 user 0m31.612s 00:06:11.800 sys 0m6.459s 00:06:11.800 23:46:05 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:11.800 23:46:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.800 
************************************ 00:06:11.800 END TEST cpu_locks 00:06:11.800 ************************************ 00:06:11.800 ************************************ 00:06:11.800 END TEST event 00:06:11.800 ************************************ 00:06:11.800 00:06:11.800 real 0m46.859s 00:06:11.800 user 1m25.154s 00:06:11.800 sys 0m10.142s 00:06:11.800 23:46:05 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:11.800 23:46:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.059 23:46:05 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:12.059 23:46:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:12.059 23:46:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:12.059 23:46:05 -- common/autotest_common.sh@10 -- # set +x 00:06:12.059 ************************************ 00:06:12.059 START TEST thread 00:06:12.059 ************************************ 00:06:12.059 23:46:05 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:12.059 * Looking for test storage... 
00:06:12.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:12.059 23:46:06 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:12.059 23:46:06 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:12.060 23:46:06 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:12.060 23:46:06 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:12.060 23:46:06 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.060 23:46:06 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.319 23:46:06 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.319 23:46:06 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.319 23:46:06 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.319 23:46:06 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.319 23:46:06 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.319 23:46:06 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.319 23:46:06 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.319 23:46:06 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.319 23:46:06 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.319 23:46:06 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:12.319 23:46:06 thread -- scripts/common.sh@345 -- # : 1 00:06:12.319 23:46:06 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.319 23:46:06 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.319 23:46:06 thread -- scripts/common.sh@365 -- # decimal 1 00:06:12.319 23:46:06 thread -- scripts/common.sh@353 -- # local d=1 00:06:12.319 23:46:06 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.319 23:46:06 thread -- scripts/common.sh@355 -- # echo 1 00:06:12.319 23:46:06 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.319 23:46:06 thread -- scripts/common.sh@366 -- # decimal 2 00:06:12.319 23:46:06 thread -- scripts/common.sh@353 -- # local d=2 00:06:12.319 23:46:06 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.319 23:46:06 thread -- scripts/common.sh@355 -- # echo 2 00:06:12.319 23:46:06 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.319 23:46:06 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.319 23:46:06 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.319 23:46:06 thread -- scripts/common.sh@368 -- # return 0 00:06:12.319 23:46:06 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.319 23:46:06 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:12.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.319 --rc genhtml_branch_coverage=1 00:06:12.319 --rc genhtml_function_coverage=1 00:06:12.319 --rc genhtml_legend=1 00:06:12.319 --rc geninfo_all_blocks=1 00:06:12.319 --rc geninfo_unexecuted_blocks=1 00:06:12.319 00:06:12.319 ' 00:06:12.319 23:46:06 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:12.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.319 --rc genhtml_branch_coverage=1 00:06:12.319 --rc genhtml_function_coverage=1 00:06:12.319 --rc genhtml_legend=1 00:06:12.319 --rc geninfo_all_blocks=1 00:06:12.319 --rc geninfo_unexecuted_blocks=1 00:06:12.319 00:06:12.319 ' 00:06:12.319 23:46:06 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:12.319 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.319 --rc genhtml_branch_coverage=1 00:06:12.319 --rc genhtml_function_coverage=1 00:06:12.319 --rc genhtml_legend=1 00:06:12.319 --rc geninfo_all_blocks=1 00:06:12.319 --rc geninfo_unexecuted_blocks=1 00:06:12.319 00:06:12.319 ' 00:06:12.319 23:46:06 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:12.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.319 --rc genhtml_branch_coverage=1 00:06:12.319 --rc genhtml_function_coverage=1 00:06:12.319 --rc genhtml_legend=1 00:06:12.319 --rc geninfo_all_blocks=1 00:06:12.319 --rc geninfo_unexecuted_blocks=1 00:06:12.319 00:06:12.319 ' 00:06:12.319 23:46:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:12.319 23:46:06 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:12.319 23:46:06 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:12.319 23:46:06 thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.319 ************************************ 00:06:12.319 START TEST thread_poller_perf 00:06:12.319 ************************************ 00:06:12.319 23:46:06 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:12.319 [2024-11-02 23:46:06.221463] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:06:12.319 [2024-11-02 23:46:06.221650] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71113 ] 00:06:12.319 [2024-11-02 23:46:06.378119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.319 [2024-11-02 23:46:06.402529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.319 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:13.714 [2024-11-02T23:46:07.809Z] ====================================== 00:06:13.714 [2024-11-02T23:46:07.809Z] busy:2301876924 (cyc) 00:06:13.714 [2024-11-02T23:46:07.809Z] total_run_count: 421000 00:06:13.714 [2024-11-02T23:46:07.809Z] tsc_hz: 2290000000 (cyc) 00:06:13.714 [2024-11-02T23:46:07.809Z] ====================================== 00:06:13.714 [2024-11-02T23:46:07.809Z] poller_cost: 5467 (cyc), 2387 (nsec) 00:06:13.714 00:06:13.714 real 0m1.288s 00:06:13.714 user 0m1.111s 00:06:13.714 sys 0m0.072s 00:06:13.714 ************************************ 00:06:13.714 END TEST thread_poller_perf 00:06:13.714 ************************************ 00:06:13.714 23:46:07 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:13.714 23:46:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:13.714 23:46:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:13.714 23:46:07 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:13.714 23:46:07 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:13.714 23:46:07 thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.714 ************************************ 00:06:13.714 START TEST thread_poller_perf 00:06:13.714 
************************************ 00:06:13.714 23:46:07 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:13.714 [2024-11-02 23:46:07.584703] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:06:13.714 [2024-11-02 23:46:07.584900] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71144 ] 00:06:13.714 [2024-11-02 23:46:07.739894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.714 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:13.714 [2024-11-02 23:46:07.764184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.094 [2024-11-02T23:46:09.189Z] ====================================== 00:06:15.094 [2024-11-02T23:46:09.189Z] busy:2293352308 (cyc) 00:06:15.094 [2024-11-02T23:46:09.189Z] total_run_count: 5459000 00:06:15.094 [2024-11-02T23:46:09.189Z] tsc_hz: 2290000000 (cyc) 00:06:15.094 [2024-11-02T23:46:09.189Z] ====================================== 00:06:15.094 [2024-11-02T23:46:09.189Z] poller_cost: 420 (cyc), 183 (nsec) 00:06:15.094 ************************************ 00:06:15.094 END TEST thread_poller_perf 00:06:15.094 ************************************ 00:06:15.094 00:06:15.094 real 0m1.284s 00:06:15.094 user 0m1.097s 00:06:15.094 sys 0m0.081s 00:06:15.094 23:46:08 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:15.094 23:46:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:15.094 23:46:08 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:15.094 ************************************ 00:06:15.094 END TEST thread 00:06:15.094 ************************************ 00:06:15.094 
00:06:15.094 real 0m2.931s 00:06:15.094 user 0m2.377s 00:06:15.094 sys 0m0.357s 00:06:15.094 23:46:08 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:15.094 23:46:08 thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.094 23:46:08 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:15.094 23:46:08 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:15.094 23:46:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:15.094 23:46:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:15.094 23:46:08 -- common/autotest_common.sh@10 -- # set +x 00:06:15.094 ************************************ 00:06:15.094 START TEST app_cmdline 00:06:15.094 ************************************ 00:06:15.094 23:46:08 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:15.094 * Looking for test storage... 00:06:15.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:15.094 23:46:09 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:15.094 23:46:09 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:15.094 23:46:09 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:15.094 23:46:09 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.094 23:46:09 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:15.094 23:46:09 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.094 23:46:09 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:15.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.094 --rc genhtml_branch_coverage=1 00:06:15.094 --rc genhtml_function_coverage=1 00:06:15.094 --rc 
genhtml_legend=1 00:06:15.094 --rc geninfo_all_blocks=1 00:06:15.094 --rc geninfo_unexecuted_blocks=1 00:06:15.094 00:06:15.094 ' 00:06:15.095 23:46:09 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:15.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.095 --rc genhtml_branch_coverage=1 00:06:15.095 --rc genhtml_function_coverage=1 00:06:15.095 --rc genhtml_legend=1 00:06:15.095 --rc geninfo_all_blocks=1 00:06:15.095 --rc geninfo_unexecuted_blocks=1 00:06:15.095 00:06:15.095 ' 00:06:15.095 23:46:09 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:15.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.095 --rc genhtml_branch_coverage=1 00:06:15.095 --rc genhtml_function_coverage=1 00:06:15.095 --rc genhtml_legend=1 00:06:15.095 --rc geninfo_all_blocks=1 00:06:15.095 --rc geninfo_unexecuted_blocks=1 00:06:15.095 00:06:15.095 ' 00:06:15.095 23:46:09 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:15.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.095 --rc genhtml_branch_coverage=1 00:06:15.095 --rc genhtml_function_coverage=1 00:06:15.095 --rc genhtml_legend=1 00:06:15.095 --rc geninfo_all_blocks=1 00:06:15.095 --rc geninfo_unexecuted_blocks=1 00:06:15.095 00:06:15.095 ' 00:06:15.095 23:46:09 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:15.095 23:46:09 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71230 00:06:15.095 23:46:09 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:15.095 23:46:09 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71230 00:06:15.095 23:46:09 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 71230 ']' 00:06:15.095 23:46:09 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.095 23:46:09 app_cmdline -- common/autotest_common.sh@838 -- # 
local max_retries=100 00:06:15.095 23:46:09 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.095 23:46:09 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:15.095 23:46:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:15.355 [2024-11-02 23:46:09.251386] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:06:15.355 [2024-11-02 23:46:09.251499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71230 ] 00:06:15.355 [2024-11-02 23:46:09.403079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.355 [2024-11-02 23:46:09.427790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.326 23:46:10 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:16.326 23:46:10 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:06:16.326 23:46:10 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:16.326 { 00:06:16.326 "version": "SPDK v25.01-pre git sha1 fa3ab7384", 00:06:16.326 "fields": { 00:06:16.326 "major": 25, 00:06:16.326 "minor": 1, 00:06:16.326 "patch": 0, 00:06:16.326 "suffix": "-pre", 00:06:16.326 "commit": "fa3ab7384" 00:06:16.326 } 00:06:16.326 } 00:06:16.326 23:46:10 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:16.326 23:46:10 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:16.326 23:46:10 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:16.326 23:46:10 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:16.326 23:46:10 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:16.326 23:46:10 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:16.326 23:46:10 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:16.326 23:46:10 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.326 23:46:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:16.326 23:46:10 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.326 23:46:10 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:16.326 23:46:10 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:16.326 23:46:10 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:16.326 23:46:10 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:16.326 23:46:10 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:16.326 23:46:10 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:16.326 23:46:10 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.326 23:46:10 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:16.326 23:46:10 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.326 23:46:10 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:16.326 23:46:10 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.326 23:46:10 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:16.326 23:46:10 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:16.326 23:46:10 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:16.586 request: 00:06:16.586 { 00:06:16.586 "method": "env_dpdk_get_mem_stats", 00:06:16.586 "req_id": 1 00:06:16.586 } 00:06:16.586 Got JSON-RPC error response 00:06:16.586 response: 00:06:16.586 { 00:06:16.586 "code": -32601, 00:06:16.586 "message": "Method not found" 00:06:16.586 } 00:06:16.586 23:46:10 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:16.586 23:46:10 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:16.586 23:46:10 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:16.586 23:46:10 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:16.586 23:46:10 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71230 00:06:16.586 23:46:10 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 71230 ']' 00:06:16.586 23:46:10 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 71230 00:06:16.586 23:46:10 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:06:16.586 23:46:10 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:16.586 23:46:10 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71230 00:06:16.586 killing process with pid 71230 00:06:16.586 23:46:10 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:16.586 23:46:10 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:16.586 23:46:10 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71230' 00:06:16.586 23:46:10 app_cmdline -- common/autotest_common.sh@971 -- # kill 71230 00:06:16.586 23:46:10 app_cmdline -- common/autotest_common.sh@976 -- # wait 71230 00:06:17.153 00:06:17.153 real 0m2.018s 00:06:17.153 user 0m2.300s 00:06:17.153 sys 0m0.540s 00:06:17.153 
************************************ 00:06:17.153 END TEST app_cmdline 00:06:17.153 ************************************ 00:06:17.153 23:46:10 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:17.153 23:46:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:17.153 23:46:11 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:17.153 23:46:11 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:17.153 23:46:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:17.153 23:46:11 -- common/autotest_common.sh@10 -- # set +x 00:06:17.153 ************************************ 00:06:17.153 START TEST version 00:06:17.153 ************************************ 00:06:17.153 23:46:11 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:17.153 * Looking for test storage... 00:06:17.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:17.153 23:46:11 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:17.153 23:46:11 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:17.153 23:46:11 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:17.153 23:46:11 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:17.153 23:46:11 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.153 23:46:11 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.153 23:46:11 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.153 23:46:11 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.153 23:46:11 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.153 23:46:11 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.153 23:46:11 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.153 23:46:11 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.153 23:46:11 version -- scripts/common.sh@340 -- # ver1_l=2 
00:06:17.153 23:46:11 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.153 23:46:11 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.153 23:46:11 version -- scripts/common.sh@344 -- # case "$op" in 00:06:17.153 23:46:11 version -- scripts/common.sh@345 -- # : 1 00:06:17.153 23:46:11 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.153 23:46:11 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.153 23:46:11 version -- scripts/common.sh@365 -- # decimal 1 00:06:17.154 23:46:11 version -- scripts/common.sh@353 -- # local d=1 00:06:17.154 23:46:11 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.154 23:46:11 version -- scripts/common.sh@355 -- # echo 1 00:06:17.154 23:46:11 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.154 23:46:11 version -- scripts/common.sh@366 -- # decimal 2 00:06:17.413 23:46:11 version -- scripts/common.sh@353 -- # local d=2 00:06:17.413 23:46:11 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.413 23:46:11 version -- scripts/common.sh@355 -- # echo 2 00:06:17.413 23:46:11 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.413 23:46:11 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.413 23:46:11 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.413 23:46:11 version -- scripts/common.sh@368 -- # return 0 00:06:17.413 23:46:11 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.413 23:46:11 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:17.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.413 --rc genhtml_branch_coverage=1 00:06:17.413 --rc genhtml_function_coverage=1 00:06:17.413 --rc genhtml_legend=1 00:06:17.413 --rc geninfo_all_blocks=1 00:06:17.413 --rc geninfo_unexecuted_blocks=1 00:06:17.413 00:06:17.413 ' 00:06:17.413 23:46:11 version -- 
common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:17.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.413 --rc genhtml_branch_coverage=1 00:06:17.413 --rc genhtml_function_coverage=1 00:06:17.413 --rc genhtml_legend=1 00:06:17.413 --rc geninfo_all_blocks=1 00:06:17.413 --rc geninfo_unexecuted_blocks=1 00:06:17.413 00:06:17.413 ' 00:06:17.413 23:46:11 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:17.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.413 --rc genhtml_branch_coverage=1 00:06:17.413 --rc genhtml_function_coverage=1 00:06:17.413 --rc genhtml_legend=1 00:06:17.413 --rc geninfo_all_blocks=1 00:06:17.413 --rc geninfo_unexecuted_blocks=1 00:06:17.413 00:06:17.413 ' 00:06:17.413 23:46:11 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:17.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.413 --rc genhtml_branch_coverage=1 00:06:17.413 --rc genhtml_function_coverage=1 00:06:17.413 --rc genhtml_legend=1 00:06:17.413 --rc geninfo_all_blocks=1 00:06:17.413 --rc geninfo_unexecuted_blocks=1 00:06:17.413 00:06:17.413 ' 00:06:17.413 23:46:11 version -- app/version.sh@17 -- # get_header_version major 00:06:17.413 23:46:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:17.413 23:46:11 version -- app/version.sh@14 -- # cut -f2 00:06:17.413 23:46:11 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.413 23:46:11 version -- app/version.sh@17 -- # major=25 00:06:17.413 23:46:11 version -- app/version.sh@18 -- # get_header_version minor 00:06:17.413 23:46:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:17.413 23:46:11 version -- app/version.sh@14 -- # cut -f2 00:06:17.413 23:46:11 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.413 23:46:11 version -- app/version.sh@18 -- 
# minor=1 00:06:17.413 23:46:11 version -- app/version.sh@19 -- # get_header_version patch 00:06:17.413 23:46:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:17.413 23:46:11 version -- app/version.sh@14 -- # cut -f2 00:06:17.413 23:46:11 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.413 23:46:11 version -- app/version.sh@19 -- # patch=0 00:06:17.413 23:46:11 version -- app/version.sh@20 -- # get_header_version suffix 00:06:17.413 23:46:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:17.413 23:46:11 version -- app/version.sh@14 -- # cut -f2 00:06:17.413 23:46:11 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.413 23:46:11 version -- app/version.sh@20 -- # suffix=-pre 00:06:17.413 23:46:11 version -- app/version.sh@22 -- # version=25.1 00:06:17.413 23:46:11 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:17.413 23:46:11 version -- app/version.sh@28 -- # version=25.1rc0 00:06:17.413 23:46:11 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:17.413 23:46:11 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:17.413 23:46:11 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:17.413 23:46:11 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:17.413 ************************************ 00:06:17.413 END TEST version 00:06:17.413 ************************************ 00:06:17.413 00:06:17.413 real 0m0.322s 00:06:17.413 user 0m0.197s 00:06:17.413 sys 0m0.184s 00:06:17.413 23:46:11 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:17.413 23:46:11 version -- 
common/autotest_common.sh@10 -- # set +x 00:06:17.413 23:46:11 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:17.413 23:46:11 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:17.413 23:46:11 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:17.413 23:46:11 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:17.413 23:46:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:17.413 23:46:11 -- common/autotest_common.sh@10 -- # set +x 00:06:17.413 ************************************ 00:06:17.413 START TEST bdev_raid 00:06:17.413 ************************************ 00:06:17.413 23:46:11 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:17.672 * Looking for test storage... 00:06:17.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:17.673 23:46:11 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:17.673 23:46:11 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:06:17.673 23:46:11 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:17.673 23:46:11 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.673 
23:46:11 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.673 23:46:11 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:17.673 23:46:11 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.673 23:46:11 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:17.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.673 --rc genhtml_branch_coverage=1 00:06:17.673 --rc genhtml_function_coverage=1 00:06:17.673 --rc genhtml_legend=1 00:06:17.673 --rc geninfo_all_blocks=1 00:06:17.673 --rc geninfo_unexecuted_blocks=1 00:06:17.673 00:06:17.673 ' 00:06:17.673 23:46:11 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 
00:06:17.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.673 --rc genhtml_branch_coverage=1 00:06:17.673 --rc genhtml_function_coverage=1 00:06:17.673 --rc genhtml_legend=1 00:06:17.673 --rc geninfo_all_blocks=1 00:06:17.673 --rc geninfo_unexecuted_blocks=1 00:06:17.673 00:06:17.673 ' 00:06:17.673 23:46:11 bdev_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:17.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.673 --rc genhtml_branch_coverage=1 00:06:17.673 --rc genhtml_function_coverage=1 00:06:17.673 --rc genhtml_legend=1 00:06:17.673 --rc geninfo_all_blocks=1 00:06:17.673 --rc geninfo_unexecuted_blocks=1 00:06:17.673 00:06:17.673 ' 00:06:17.673 23:46:11 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:17.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.673 --rc genhtml_branch_coverage=1 00:06:17.673 --rc genhtml_function_coverage=1 00:06:17.673 --rc genhtml_legend=1 00:06:17.673 --rc geninfo_all_blocks=1 00:06:17.673 --rc geninfo_unexecuted_blocks=1 00:06:17.673 00:06:17.673 ' 00:06:17.673 23:46:11 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:17.673 23:46:11 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:17.673 23:46:11 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:17.673 23:46:11 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:17.673 23:46:11 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:17.673 23:46:11 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:17.673 23:46:11 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:17.673 23:46:11 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:17.673 23:46:11 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:17.673 23:46:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:06:17.673 ************************************ 00:06:17.673 START TEST raid1_resize_data_offset_test 00:06:17.673 ************************************ 00:06:17.673 23:46:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test 00:06:17.673 23:46:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71395 00:06:17.673 Process raid pid: 71395 00:06:17.673 23:46:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:17.673 23:46:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71395' 00:06:17.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.673 23:46:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71395 00:06:17.673 23:46:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 71395 ']' 00:06:17.673 23:46:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.673 23:46:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:17.673 23:46:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.673 23:46:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:17.673 23:46:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.673 [2024-11-02 23:46:11.741529] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:06:17.673 [2024-11-02 23:46:11.741726] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.932 [2024-11-02 23:46:11.897384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.932 [2024-11-02 23:46:11.922790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.932 [2024-11-02 23:46:11.964370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:17.932 [2024-11-02 23:46:11.964483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:18.499 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:18.499 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0 00:06:18.499 23:46:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:18.499 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.499 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.499 malloc0 00:06:18.499 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.499 23:46:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:18.499 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.499 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.758 malloc1 00:06:18.758 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.758 23:46:12 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:18.758 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.758 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.758 null0 00:06:18.758 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.758 23:46:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:18.758 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.758 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.758 [2024-11-02 23:46:12.634653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:18.758 [2024-11-02 23:46:12.636517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:18.758 [2024-11-02 23:46:12.636561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:18.758 [2024-11-02 23:46:12.636692] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:18.758 [2024-11-02 23:46:12.636703] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:18.758 [2024-11-02 23:46:12.637002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:18.758 [2024-11-02 23:46:12.637145] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:18.758 [2024-11-02 23:46:12.637158] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:18.758 [2024-11-02 23:46:12.637308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:18.758 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.758 23:46:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:18.758 23:46:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:18.758 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.758 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.758 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.758 23:46:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:18.758 23:46:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:18.759 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.759 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.759 [2024-11-02 23:46:12.698538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:18.759 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.759 23:46:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:18.759 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.759 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.759 malloc2 00:06:18.759 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.759 23:46:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:18.759 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.759 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.759 [2024-11-02 23:46:12.824616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:18.759 [2024-11-02 23:46:12.829817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:18.759 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.759 [2024-11-02 23:46:12.831619] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:18.759 23:46:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:18.759 23:46:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:18.759 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.759 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.017 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.017 23:46:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:19.017 23:46:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71395 00:06:19.017 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 71395 ']' 00:06:19.017 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 71395 00:06:19.017 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname 00:06:19.017 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:06:19.017 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71395 00:06:19.017 killing process with pid 71395 00:06:19.017 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:19.017 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:19.017 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71395' 00:06:19.017 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 71395 00:06:19.017 23:46:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 71395 00:06:19.017 [2024-11-02 23:46:12.916422] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:19.017 [2024-11-02 23:46:12.917714] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:19.017 [2024-11-02 23:46:12.917783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:19.017 [2024-11-02 23:46:12.917800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:19.017 [2024-11-02 23:46:12.923626] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:19.017 [2024-11-02 23:46:12.923918] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:19.017 [2024-11-02 23:46:12.923949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:19.276 [2024-11-02 23:46:13.132663] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:19.276 23:46:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:19.276 00:06:19.276 real 0m1.678s 00:06:19.276 user 0m1.682s 00:06:19.276 sys 0m0.428s 00:06:19.276 
************************************ 00:06:19.276 END TEST raid1_resize_data_offset_test 00:06:19.276 ************************************ 00:06:19.276 23:46:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:19.276 23:46:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.536 23:46:13 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:19.536 23:46:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:19.536 23:46:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:19.536 23:46:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:19.536 ************************************ 00:06:19.536 START TEST raid0_resize_superblock_test 00:06:19.536 ************************************ 00:06:19.536 23:46:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0 00:06:19.536 23:46:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:19.536 23:46:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71446 00:06:19.536 23:46:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:19.536 23:46:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71446' 00:06:19.536 Process raid pid: 71446 00:06:19.536 23:46:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71446 00:06:19.536 23:46:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 71446 ']' 00:06:19.536 23:46:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.536 23:46:13 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:06:19.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.536 23:46:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.536 23:46:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:19.536 23:46:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.536 [2024-11-02 23:46:13.486835] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:06:19.536 [2024-11-02 23:46:13.487025] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:19.536 [2024-11-02 23:46:13.619252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.793 [2024-11-02 23:46:13.644911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.793 [2024-11-02 23:46:13.686535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:19.793 [2024-11-02 23:46:13.686647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:20.360 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:20.360 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:06:20.360 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:20.360 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.360 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:06:20.360 malloc0 00:06:20.360 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.360 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:20.360 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.360 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.360 [2024-11-02 23:46:14.440931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:20.360 [2024-11-02 23:46:14.441002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:20.360 [2024-11-02 23:46:14.441024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:20.360 [2024-11-02 23:46:14.441035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:20.360 [2024-11-02 23:46:14.443098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:20.360 [2024-11-02 23:46:14.443140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:20.360 pt0 00:06:20.360 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.360 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:20.360 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.360 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.618 d50350de-5a9a-46ad-bd61-b185e039065e 00:06:20.618 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.618 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:20.618 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.618 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.618 6a15d876-0ee7-477c-ace7-79aba9eaf15b 00:06:20.618 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.618 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.619 fa124560-af59-4abd-88b6-0870322b5d35 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.619 [2024-11-02 23:46:14.576246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6a15d876-0ee7-477c-ace7-79aba9eaf15b is claimed 00:06:20.619 [2024-11-02 23:46:14.576354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fa124560-af59-4abd-88b6-0870322b5d35 is claimed 00:06:20.619 [2024-11-02 23:46:14.576487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:20.619 [2024-11-02 23:46:14.576502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:20.619 [2024-11-02 23:46:14.576793] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:20.619 [2024-11-02 23:46:14.576944] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:20.619 [2024-11-02 23:46:14.576960] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:20.619 [2024-11-02 23:46:14.577085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:20.619 23:46:14 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.619 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.619 [2024-11-02 23:46:14.692267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:20.877 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.877 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:20.877 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:20.877 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:20.877 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:20.877 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.877 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.878 [2024-11-02 23:46:14.724196] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:20.878 [2024-11-02 23:46:14.724275] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '6a15d876-0ee7-477c-ace7-79aba9eaf15b' was resized: old size 131072, new size 204800 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.878 [2024-11-02 23:46:14.736061] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:20.878 [2024-11-02 23:46:14.736085] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'fa124560-af59-4abd-88b6-0870322b5d35' was resized: old size 131072, new size 204800 00:06:20.878 [2024-11-02 23:46:14.736114] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.878 23:46:14 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.878 [2024-11-02 23:46:14.847960] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.878 [2024-11-02 23:46:14.891674] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:20.878 [2024-11-02 23:46:14.891807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:20.878 [2024-11-02 23:46:14.891840] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:20.878 [2024-11-02 23:46:14.891873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:20.878 [2024-11-02 23:46:14.892000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:20.878 [2024-11-02 23:46:14.892065] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:20.878 [2024-11-02 23:46:14.892113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.878 [2024-11-02 23:46:14.903622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:20.878 [2024-11-02 23:46:14.903722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:20.878 [2024-11-02 23:46:14.903776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:20.878 [2024-11-02 23:46:14.903818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:20.878 [2024-11-02 23:46:14.905898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:20.878 [2024-11-02 23:46:14.905982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:20.878 [2024-11-02 23:46:14.907339] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 6a15d876-0ee7-477c-ace7-79aba9eaf15b 00:06:20.878 [2024-11-02 23:46:14.907439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6a15d876-0ee7-477c-ace7-79aba9eaf15b is claimed 00:06:20.878 [2024-11-02 23:46:14.907550] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev fa124560-af59-4abd-88b6-0870322b5d35 00:06:20.878 [2024-11-02 23:46:14.907613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fa124560-af59-4abd-88b6-0870322b5d35 is claimed 00:06:20.878 [2024-11-02 23:46:14.907731] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev fa124560-af59-4abd-88b6-0870322b5d35 (2) smaller than existing raid bdev Raid (3) 00:06:20.878 [2024-11-02 23:46:14.907814] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 6a15d876-0ee7-477c-ace7-79aba9eaf15b: File exists 00:06:20.878 [2024-11-02 23:46:14.907884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:06:20.878 [2024-11-02 23:46:14.907917] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:20.878 [2024-11-02 23:46:14.908143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:06:20.878 [2024-11-02 23:46:14.908327] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:06:20.878 [2024-11-02 23:46:14.908369] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580 00:06:20.878 [2024-11-02 23:46:14.908545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:20.878 pt0 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.878 [2024-11-02 23:46:14.931977] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71446 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 71446 ']' 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 71446 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:06:20.878 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:21.136 23:46:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71446 00:06:21.136 killing process with pid 71446 00:06:21.136 23:46:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:21.136 23:46:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:21.136 23:46:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71446' 00:06:21.136 23:46:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 71446 00:06:21.136 [2024-11-02 23:46:15.004790] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:21.136 [2024-11-02 23:46:15.004851] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:21.136 [2024-11-02 23:46:15.004887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:21.136 [2024-11-02 23:46:15.004896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline 00:06:21.136 23:46:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 71446 00:06:21.136 [2024-11-02 23:46:15.161535] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:21.395 23:46:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:21.395 00:06:21.395 real 0m1.965s 00:06:21.395 user 0m2.262s 00:06:21.395 sys 0m0.479s 00:06:21.395 ************************************ 00:06:21.395 END TEST raid0_resize_superblock_test 00:06:21.395 ************************************ 00:06:21.395 23:46:15 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:06:21.395 23:46:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.395 23:46:15 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:21.395 23:46:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:21.395 23:46:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:21.395 23:46:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:21.395 ************************************ 00:06:21.395 START TEST raid1_resize_superblock_test 00:06:21.395 ************************************ 00:06:21.395 23:46:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1 00:06:21.395 23:46:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:21.395 23:46:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71517 00:06:21.395 Process raid pid: 71517 00:06:21.395 23:46:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:21.395 23:46:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71517' 00:06:21.395 23:46:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71517 00:06:21.395 23:46:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 71517 ']' 00:06:21.395 23:46:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.395 23:46:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:21.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:21.395 23:46:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.395 23:46:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:21.395 23:46:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.654 [2024-11-02 23:46:15.522651] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:06:21.654 [2024-11-02 23:46:15.522816] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:21.654 [2024-11-02 23:46:15.656939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.654 [2024-11-02 23:46:15.682929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.654 [2024-11-02 23:46:15.724787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:21.654 [2024-11-02 23:46:15.724823] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.590 malloc0 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.590 23:46:16 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.590 [2024-11-02 23:46:16.476354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:22.590 [2024-11-02 23:46:16.476425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:22.590 [2024-11-02 23:46:16.476445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:22.590 [2024-11-02 23:46:16.476456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:22.590 [2024-11-02 23:46:16.478540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:22.590 [2024-11-02 23:46:16.478577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:22.590 pt0 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.590 797f5c77-3184-4f55-b228-badec5fb3218 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.590 23:46:16 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.590 b18d13b8-df82-4882-9492-995dfbb2a35b 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.590 79301baf-69d8-4c57-948d-89d389f6d12d 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.590 [2024-11-02 23:46:16.610749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b18d13b8-df82-4882-9492-995dfbb2a35b is claimed 00:06:22.590 [2024-11-02 23:46:16.610847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 79301baf-69d8-4c57-948d-89d389f6d12d is claimed 00:06:22.590 [2024-11-02 23:46:16.610953] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:22.590 [2024-11-02 23:46:16.610979] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:22.590 [2024-11-02 23:46:16.611226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:22.590 [2024-11-02 23:46:16.611360] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:22.590 [2024-11-02 23:46:16.611371] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:22.590 [2024-11-02 23:46:16.611491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:22.590 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.849 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:22.849 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:22.849 23:46:16 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:22.849 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:22.849 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:22.849 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.849 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.849 [2024-11-02 23:46:16.722824] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:22.849 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.849 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:22.849 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:22.849 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:22.849 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:22.849 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.849 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.849 [2024-11-02 23:46:16.766807] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:22.849 [2024-11-02 23:46:16.766891] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b18d13b8-df82-4882-9492-995dfbb2a35b' was resized: old size 131072, new size 204800 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:22.850 23:46:16 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.850 [2024-11-02 23:46:16.774676] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:22.850 [2024-11-02 23:46:16.774763] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '79301baf-69d8-4c57-948d-89d389f6d12d' was resized: old size 131072, new size 204800 00:06:22.850 [2024-11-02 23:46:16.774848] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.850 [2024-11-02 23:46:16.886701] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.850 [2024-11-02 23:46:16.930353] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:22.850 [2024-11-02 23:46:16.930505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:06:22.850 [2024-11-02 23:46:16.930551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:22.850 [2024-11-02 23:46:16.930757] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:22.850 [2024-11-02 23:46:16.930952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:22.850 [2024-11-02 23:46:16.931046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:22.850 [2024-11-02 23:46:16.931099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.850 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.108 [2024-11-02 23:46:16.942317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:23.108 [2024-11-02 23:46:16.942367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:23.108 [2024-11-02 23:46:16.942384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:23.108 [2024-11-02 23:46:16.942399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:23.108 [2024-11-02 23:46:16.944578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:23.108 [2024-11-02 23:46:16.944616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:23.108 [2024-11-02 23:46:16.946072] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
b18d13b8-df82-4882-9492-995dfbb2a35b 00:06:23.108 [2024-11-02 23:46:16.946123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b18d13b8-df82-4882-9492-995dfbb2a35b is claimed 00:06:23.108 [2024-11-02 23:46:16.946197] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 79301baf-69d8-4c57-948d-89d389f6d12d 00:06:23.108 [2024-11-02 23:46:16.946217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 79301baf-69d8-4c57-948d-89d389f6d12d is claimed 00:06:23.108 [2024-11-02 23:46:16.946310] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 79301baf-69d8-4c57-948d-89d389f6d12d (2) smaller than existing raid bdev Raid (3) 00:06:23.108 [2024-11-02 23:46:16.946331] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev b18d13b8-df82-4882-9492-995dfbb2a35b: File exists 00:06:23.108 [2024-11-02 23:46:16.946381] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:06:23.108 [2024-11-02 23:46:16.946392] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:23.108 [2024-11-02 23:46:16.946623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:06:23.108 [2024-11-02 23:46:16.946793] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:06:23.108 [2024-11-02 23:46:16.946805] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580 00:06:23.108 [2024-11-02 23:46:16.946949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:23.108 pt0 00:06:23.108 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.108 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:23.108 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:23.108 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.109 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.109 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:23.109 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:23.109 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:23.109 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:23.109 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.109 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.109 [2024-11-02 23:46:16.970566] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:23.109 23:46:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.109 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:23.109 23:46:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:23.109 23:46:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:23.109 23:46:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71517 00:06:23.109 23:46:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 71517 ']' 00:06:23.109 23:46:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 71517 00:06:23.109 23:46:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:06:23.109 23:46:17 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:23.109 23:46:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71517 00:06:23.109 23:46:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:23.109 killing process with pid 71517 00:06:23.109 23:46:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:23.109 23:46:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71517' 00:06:23.109 23:46:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 71517 00:06:23.109 [2024-11-02 23:46:17.050714] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:23.109 [2024-11-02 23:46:17.050855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:23.109 [2024-11-02 23:46:17.050926] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:23.109 [2024-11-02 23:46:17.050936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline 00:06:23.109 23:46:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 71517 00:06:23.367 [2024-11-02 23:46:17.209355] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:23.367 23:46:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:23.367 00:06:23.367 real 0m1.981s 00:06:23.367 user 0m2.305s 00:06:23.367 sys 0m0.462s 00:06:23.367 ************************************ 00:06:23.367 END TEST raid1_resize_superblock_test 00:06:23.367 ************************************ 00:06:23.367 23:46:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:23.367 23:46:17 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:06:23.626 23:46:17 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:23.626 23:46:17 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:23.626 23:46:17 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:23.626 23:46:17 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:23.626 23:46:17 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:23.626 23:46:17 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:23.626 23:46:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:23.626 23:46:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:23.626 23:46:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:23.626 ************************************ 00:06:23.626 START TEST raid_function_test_raid0 00:06:23.626 ************************************ 00:06:23.627 23:46:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:06:23.627 23:46:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:23.627 23:46:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:23.627 23:46:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:23.627 Process raid pid: 71592 00:06:23.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:23.627 23:46:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=71592 00:06:23.627 23:46:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:23.627 23:46:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71592' 00:06:23.627 23:46:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 71592 00:06:23.627 23:46:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 71592 ']' 00:06:23.627 23:46:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.627 23:46:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:23.627 23:46:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.627 23:46:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:23.627 23:46:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:23.627 [2024-11-02 23:46:17.583946] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:06:23.627 [2024-11-02 23:46:17.584169] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.886 [2024-11-02 23:46:17.739240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.886 [2024-11-02 23:46:17.764893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.886 [2024-11-02 23:46:17.807716] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:23.886 [2024-11-02 23:46:17.807819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:24.482 Base_1 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:24.482 Base_2 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:24.482 [2024-11-02 23:46:18.454365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:24.482 [2024-11-02 23:46:18.456249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:24.482 [2024-11-02 23:46:18.456348] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:24.482 [2024-11-02 23:46:18.456386] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:24.482 [2024-11-02 23:46:18.456698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:24.482 [2024-11-02 23:46:18.456873] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:24.482 [2024-11-02 23:46:18.456919] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:06:24.482 [2024-11-02 23:46:18.457074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.482 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:24.483 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.483 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:24.483 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.483 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:24.483 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:24.742 [2024-11-02 23:46:18.689996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:24.742 /dev/nbd0 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:24.742 
23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:24.742 1+0 records in 00:06:24.742 1+0 records out 00:06:24.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551899 s, 7.4 MB/s 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # size=4096 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # return 0 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:24.742 23:46:18 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:25.001 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:25.001 { 00:06:25.001 "nbd_device": "/dev/nbd0", 00:06:25.001 "bdev_name": "raid" 00:06:25.001 } 00:06:25.001 ]' 00:06:25.001 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.001 { 00:06:25.001 "nbd_device": "/dev/nbd0", 00:06:25.001 "bdev_name": "raid" 00:06:25.001 } 00:06:25.001 ]' 00:06:25.001 23:46:18 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:25.001 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:25.002 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:25.002 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:25.002 4096+0 records in 00:06:25.002 4096+0 records out 00:06:25.002 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.033231 s, 63.1 MB/s 00:06:25.002 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:25.261 4096+0 records in 00:06:25.261 4096+0 records out 00:06:25.261 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.165764 s, 12.7 MB/s 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:25.261 128+0 records in 00:06:25.261 128+0 records out 00:06:25.261 65536 bytes (66 kB, 64 KiB) copied, 0.00131923 s, 49.7 MB/s 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:25.261 2035+0 records in 00:06:25.261 2035+0 records out 00:06:25.261 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0154385 s, 67.5 MB/s 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:25.261 23:46:19 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:25.261 456+0 records in 00:06:25.261 456+0 records out 00:06:25.261 233472 bytes (233 kB, 228 KiB) copied, 0.00393316 s, 59.4 MB/s 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:25.261 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.521 23:46:19 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.521 [2024-11-02 23:46:19.567366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:25.521 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 71592 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # '[' -z 71592 ']' 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # kill -0 71592 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71592 00:06:25.789 killing process with pid 71592 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:25.789 23:46:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71592' 00:06:25.790 23:46:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 71592 
00:06:25.790 [2024-11-02 23:46:19.874078] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:25.790 [2024-11-02 23:46:19.874199] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:25.790 [2024-11-02 23:46:19.874252] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:25.790 23:46:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 71592 00:06:25.790 [2024-11-02 23:46:19.874265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:06:26.057 [2024-11-02 23:46:19.898218] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:26.057 23:46:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:26.057 00:06:26.057 real 0m2.594s 00:06:26.057 user 0m3.228s 00:06:26.057 sys 0m0.880s 00:06:26.057 23:46:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.057 23:46:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:26.057 ************************************ 00:06:26.057 END TEST raid_function_test_raid0 00:06:26.057 ************************************ 00:06:26.316 23:46:20 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:26.316 23:46:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:26.316 23:46:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:26.316 23:46:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:26.316 ************************************ 00:06:26.316 START TEST raid_function_test_concat 00:06:26.316 ************************************ 00:06:26.316 23:46:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat 00:06:26.316 23:46:20 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:26.316 23:46:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:26.316 23:46:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:26.316 23:46:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=71706 00:06:26.316 23:46:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:26.316 23:46:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71706' 00:06:26.316 Process raid pid: 71706 00:06:26.316 23:46:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 71706 00:06:26.316 23:46:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 71706 ']' 00:06:26.316 23:46:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.316 23:46:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:26.316 23:46:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.316 23:46:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:26.316 23:46:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:26.316 [2024-11-02 23:46:20.254564] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:06:26.316 [2024-11-02 23:46:20.254781] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:26.316 [2024-11-02 23:46:20.387108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.576 [2024-11-02 23:46:20.412177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.576 [2024-11-02 23:46:20.454282] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:26.576 [2024-11-02 23:46:20.454390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:27.144 Base_1 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:27.144 Base_2 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:27.144 [2024-11-02 23:46:21.116413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:27.144 [2024-11-02 23:46:21.118162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:27.144 [2024-11-02 23:46:21.118228] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:27.144 [2024-11-02 23:46:21.118241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:27.144 [2024-11-02 23:46:21.118528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:27.144 [2024-11-02 23:46:21.118638] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:27.144 [2024-11-02 23:46:21.118648] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:06:27.144 [2024-11-02 23:46:21.118794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.144 23:46:21 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:27.144 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:27.404 [2024-11-02 23:46:21.360057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:27.404 /dev/nbd0 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:27.404 1+0 records in 00:06:27.404 1+0 records out 00:06:27.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582026 s, 7.0 MB/s 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:06:27.404 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.664 { 00:06:27.664 "nbd_device": "/dev/nbd0", 00:06:27.664 "bdev_name": "raid" 00:06:27.664 } 00:06:27.664 ]' 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.664 { 00:06:27.664 "nbd_device": "/dev/nbd0", 00:06:27.664 "bdev_name": "raid" 00:06:27.664 } 00:06:27.664 ]' 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:27.664 23:46:21 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:27.664 4096+0 records in 00:06:27.664 4096+0 records out 00:06:27.664 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0348528 s, 60.2 MB/s 00:06:27.664 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:27.924 4096+0 records in 00:06:27.924 4096+0 records out 00:06:27.924 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.22701 s, 9.2 MB/s 00:06:27.924 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:27.924 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:06:27.924 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:27.924 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:27.924 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:27.924 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:27.924 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:27.924 128+0 records in 00:06:27.924 128+0 records out 00:06:27.924 65536 bytes (66 kB, 64 KiB) copied, 0.00108915 s, 60.2 MB/s 00:06:27.924 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:27.924 23:46:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:27.924 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:28.183 2035+0 records in 00:06:28.183 2035+0 records out 00:06:28.183 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0138906 s, 75.0 MB/s 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:28.183 456+0 records in 00:06:28.183 456+0 records out 00:06:28.183 233472 bytes (233 kB, 228 KiB) copied, 0.00388617 s, 60.1 MB/s 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:28.183 23:46:22 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.183 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:28.443 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.443 [2024-11-02 23:46:22.287409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:28.443 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.443 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.443 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.443 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.443 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.443 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:28.443 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.443 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:28.443 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:28.443 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:28.443 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:28.443 23:46:22 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:28.443 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.704 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.704 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.704 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.704 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:28.704 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.704 23:46:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.704 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:28.704 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:28.704 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 71706 00:06:28.704 23:46:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 71706 ']' 00:06:28.704 23:46:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # kill -0 71706 00:06:28.704 23:46:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname 00:06:28.704 23:46:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:28.704 23:46:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71706 00:06:28.704 killing process with pid 71706 00:06:28.704 23:46:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:28.704 23:46:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:28.704 23:46:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 71706' 00:06:28.704 23:46:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 71706 00:06:28.704 [2024-11-02 23:46:22.605592] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:28.704 [2024-11-02 23:46:22.605711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:28.704 [2024-11-02 23:46:22.605776] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:28.704 [2024-11-02 23:46:22.605788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:06:28.704 23:46:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 71706 00:06:28.704 [2024-11-02 23:46:22.628675] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:28.964 ************************************ 00:06:28.964 END TEST raid_function_test_concat 00:06:28.964 ************************************ 00:06:28.964 23:46:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:28.964 00:06:28.964 real 0m2.663s 00:06:28.964 user 0m3.288s 00:06:28.964 sys 0m0.916s 00:06:28.964 23:46:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:28.964 23:46:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:28.965 23:46:22 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:28.965 23:46:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:28.965 23:46:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:28.965 23:46:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:28.965 ************************************ 00:06:28.965 START TEST raid0_resize_test 00:06:28.965 ************************************ 00:06:28.965 23:46:22 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@1127 -- # raid_resize_test 0 00:06:28.965 23:46:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:28.965 23:46:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:28.965 23:46:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:28.965 23:46:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:28.965 23:46:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:28.965 23:46:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:28.965 23:46:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:28.965 23:46:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:28.965 23:46:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=71818 00:06:28.965 23:46:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:28.965 23:46:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 71818' 00:06:28.965 Process raid pid: 71818 00:06:28.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.965 23:46:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 71818 00:06:28.965 23:46:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # '[' -z 71818 ']' 00:06:28.965 23:46:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.965 23:46:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:28.965 23:46:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:28.965 23:46:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:28.965 23:46:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.965 [2024-11-02 23:46:22.988863] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:06:28.965 [2024-11-02 23:46:22.988982] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.225 [2024-11-02 23:46:23.143962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.225 [2024-11-02 23:46:23.168634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.225 [2024-11-02 23:46:23.209800] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:29.225 [2024-11-02 23:46:23.209919] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:29.794 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:29.794 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0 00:06:29.794 23:46:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:29.794 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.794 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.794 Base_1 00:06:29.794 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.794 23:46:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:29.794 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.794 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:29.794 Base_2 00:06:29.794 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.794 23:46:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:29.794 23:46:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:29.794 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.794 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.794 [2024-11-02 23:46:23.874275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:29.794 [2024-11-02 23:46:23.876082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:29.794 [2024-11-02 23:46:23.876129] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:29.794 [2024-11-02 23:46:23.876139] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:29.794 [2024-11-02 23:46:23.876385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:29.794 [2024-11-02 23:46:23.876474] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:29.794 [2024-11-02 23:46:23.876482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:29.794 [2024-11-02 23:46:23.876602] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:29.794 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.794 23:46:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:29.794 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.794 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:06:29.794 [2024-11-02 23:46:23.886242] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:29.794 [2024-11-02 23:46:23.886265] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:30.054 true 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.054 [2024-11-02 23:46:23.902400] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.054 [2024-11-02 23:46:23.950112] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:30.054 [2024-11-02 23:46:23.950133] 
bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:30.054 [2024-11-02 23:46:23.950158] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:30.054 true 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.054 [2024-11-02 23:46:23.966262] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:30.054 23:46:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.054 23:46:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:30.054 23:46:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:30.054 23:46:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:30.054 23:46:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:30.054 23:46:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:30.054 23:46:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 71818 00:06:30.054 23:46:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # '[' -z 71818 ']' 00:06:30.054 23:46:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 71818 00:06:30.054 23:46:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # uname 00:06:30.054 23:46:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- 
# '[' Linux = Linux ']' 00:06:30.054 23:46:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71818 00:06:30.054 killing process with pid 71818 00:06:30.054 23:46:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:30.054 23:46:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:30.054 23:46:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71818' 00:06:30.054 23:46:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # kill 71818 00:06:30.054 [2024-11-02 23:46:24.043762] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:30.054 [2024-11-02 23:46:24.043844] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:30.054 [2024-11-02 23:46:24.043888] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:30.054 [2024-11-02 23:46:24.043896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:30.054 23:46:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 71818 00:06:30.054 [2024-11-02 23:46:24.045315] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:30.314 23:46:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:30.314 00:06:30.314 real 0m1.345s 00:06:30.314 user 0m1.557s 00:06:30.314 sys 0m0.286s 00:06:30.314 ************************************ 00:06:30.314 END TEST raid0_resize_test 00:06:30.314 ************************************ 00:06:30.314 23:46:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:30.314 23:46:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.314 23:46:24 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:30.314 
23:46:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:30.314 23:46:24 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:30.314 23:46:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:30.314 ************************************ 00:06:30.314 START TEST raid1_resize_test 00:06:30.314 ************************************ 00:06:30.314 23:46:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:06:30.314 23:46:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:30.314 23:46:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:30.314 23:46:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:30.314 23:46:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:30.314 23:46:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:30.314 23:46:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:30.314 23:46:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:30.314 23:46:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:30.314 23:46:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=71868 00:06:30.314 23:46:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:30.314 23:46:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 71868' 00:06:30.314 Process raid pid: 71868 00:06:30.314 23:46:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 71868 00:06:30.314 23:46:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@833 -- # '[' -z 71868 ']' 00:06:30.314 23:46:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:30.314 23:46:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:30.314 23:46:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.314 23:46:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:30.314 23:46:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.314 [2024-11-02 23:46:24.401415] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:06:30.314 [2024-11-02 23:46:24.401623] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:30.574 [2024-11-02 23:46:24.556113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.574 [2024-11-02 23:46:24.583136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.574 [2024-11-02 23:46:24.624875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:30.574 [2024-11-02 23:46:24.624912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.148 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:31.149 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:06:31.149 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:31.149 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.149 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.410 
Base_1 00:06:31.410 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.410 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:31.410 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.410 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.410 Base_2 00:06:31.410 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.410 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:31.410 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:31.410 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.410 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.410 [2024-11-02 23:46:25.261556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:31.410 [2024-11-02 23:46:25.263368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:31.410 [2024-11-02 23:46:25.263423] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:31.410 [2024-11-02 23:46:25.263439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:31.410 [2024-11-02 23:46:25.263724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:31.410 [2024-11-02 23:46:25.263852] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:31.410 [2024-11-02 23:46:25.263861] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:31.410 [2024-11-02 23:46:25.263966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.411 [2024-11-02 23:46:25.273524] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:31.411 [2024-11-02 23:46:25.273554] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:31.411 true 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.411 [2024-11-02 23:46:25.289673] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.411 [2024-11-02 23:46:25.337389] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:31.411 [2024-11-02 23:46:25.337447] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:31.411 [2024-11-02 23:46:25.337508] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:31.411 true 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.411 [2024-11-02 23:46:25.353547] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 71868 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # '[' -z 71868 ']' 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 71868 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71868 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71868' 00:06:31.411 killing process with pid 71868 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 71868 00:06:31.411 [2024-11-02 23:46:25.423494] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:31.411 [2024-11-02 23:46:25.423628] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:31.411 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 71868 00:06:31.411 [2024-11-02 23:46:25.424067] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:31.411 [2024-11-02 23:46:25.424139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:31.411 [2024-11-02 23:46:25.425299] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:31.670 23:46:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:31.670 00:06:31.670 real 0m1.312s 00:06:31.670 user 0m1.489s 00:06:31.670 sys 0m0.285s 00:06:31.670 23:46:25 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:31.670 23:46:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.670 ************************************ 00:06:31.670 END TEST raid1_resize_test 00:06:31.670 ************************************ 00:06:31.670 23:46:25 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:31.670 23:46:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:31.670 23:46:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:31.670 23:46:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:31.670 23:46:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:31.670 23:46:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:31.670 ************************************ 00:06:31.670 START TEST raid_state_function_test 00:06:31.670 ************************************ 00:06:31.670 23:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:06:31.670 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:31.670 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:31.670 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:31.670 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:31.670 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:31.670 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71920 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71920' 00:06:31.671 Process raid pid: 71920 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71920 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 71920 ']' 00:06:31.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:31.671 23:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.930 [2024-11-02 23:46:25.801135] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:06:31.930 [2024-11-02 23:46:25.801264] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.930 [2024-11-02 23:46:25.959912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.930 [2024-11-02 23:46:25.985088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.189 [2024-11-02 23:46:26.027294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.189 [2024-11-02 23:46:26.027330] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.759 [2024-11-02 23:46:26.620579] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:32.759 [2024-11-02 23:46:26.620705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:32.759 [2024-11-02 23:46:26.620719] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:32.759 [2024-11-02 23:46:26.620731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.759 23:46:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:32.759 "name": "Existed_Raid", 00:06:32.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:32.759 "strip_size_kb": 64, 00:06:32.759 "state": "configuring", 00:06:32.759 
"raid_level": "raid0", 00:06:32.759 "superblock": false, 00:06:32.759 "num_base_bdevs": 2, 00:06:32.759 "num_base_bdevs_discovered": 0, 00:06:32.759 "num_base_bdevs_operational": 2, 00:06:32.759 "base_bdevs_list": [ 00:06:32.759 { 00:06:32.759 "name": "BaseBdev1", 00:06:32.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:32.759 "is_configured": false, 00:06:32.759 "data_offset": 0, 00:06:32.759 "data_size": 0 00:06:32.759 }, 00:06:32.759 { 00:06:32.759 "name": "BaseBdev2", 00:06:32.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:32.759 "is_configured": false, 00:06:32.759 "data_offset": 0, 00:06:32.759 "data_size": 0 00:06:32.759 } 00:06:32.759 ] 00:06:32.759 }' 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:32.759 23:46:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.063 [2024-11-02 23:46:27.027882] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:33.063 [2024-11-02 23:46:27.027971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:06:33.063 [2024-11-02 23:46:27.039854] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:33.063 [2024-11-02 23:46:27.039947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:33.063 [2024-11-02 23:46:27.039986] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:33.063 [2024-11-02 23:46:27.040020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.063 [2024-11-02 23:46:27.060642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:33.063 BaseBdev1 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.063 [ 00:06:33.063 { 00:06:33.063 "name": "BaseBdev1", 00:06:33.063 "aliases": [ 00:06:33.063 "4cb29adb-26a2-44bf-ab2e-ee94296949dc" 00:06:33.063 ], 00:06:33.063 "product_name": "Malloc disk", 00:06:33.063 "block_size": 512, 00:06:33.063 "num_blocks": 65536, 00:06:33.063 "uuid": "4cb29adb-26a2-44bf-ab2e-ee94296949dc", 00:06:33.063 "assigned_rate_limits": { 00:06:33.063 "rw_ios_per_sec": 0, 00:06:33.063 "rw_mbytes_per_sec": 0, 00:06:33.063 "r_mbytes_per_sec": 0, 00:06:33.063 "w_mbytes_per_sec": 0 00:06:33.063 }, 00:06:33.063 "claimed": true, 00:06:33.063 "claim_type": "exclusive_write", 00:06:33.063 "zoned": false, 00:06:33.063 "supported_io_types": { 00:06:33.063 "read": true, 00:06:33.063 "write": true, 00:06:33.063 "unmap": true, 00:06:33.063 "flush": true, 00:06:33.063 "reset": true, 00:06:33.063 "nvme_admin": false, 00:06:33.063 "nvme_io": false, 00:06:33.063 "nvme_io_md": false, 00:06:33.063 "write_zeroes": true, 00:06:33.063 "zcopy": true, 00:06:33.063 "get_zone_info": false, 00:06:33.063 "zone_management": false, 00:06:33.063 "zone_append": false, 00:06:33.063 "compare": false, 00:06:33.063 "compare_and_write": false, 00:06:33.063 "abort": true, 00:06:33.063 "seek_hole": false, 00:06:33.063 "seek_data": false, 00:06:33.063 "copy": true, 00:06:33.063 "nvme_iov_md": 
false 00:06:33.063 }, 00:06:33.063 "memory_domains": [ 00:06:33.063 { 00:06:33.063 "dma_device_id": "system", 00:06:33.063 "dma_device_type": 1 00:06:33.063 }, 00:06:33.063 { 00:06:33.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:33.063 "dma_device_type": 2 00:06:33.063 } 00:06:33.063 ], 00:06:33.063 "driver_specific": {} 00:06:33.063 } 00:06:33.063 ] 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.063 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:06:33.064 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:33.064 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:33.064 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:33.064 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:33.064 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:33.064 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:33.064 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:33.064 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:33.064 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:33.064 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:33.064 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.064 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:33.064 
23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.064 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.338 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.338 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:33.338 "name": "Existed_Raid", 00:06:33.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:33.338 "strip_size_kb": 64, 00:06:33.338 "state": "configuring", 00:06:33.338 "raid_level": "raid0", 00:06:33.338 "superblock": false, 00:06:33.338 "num_base_bdevs": 2, 00:06:33.338 "num_base_bdevs_discovered": 1, 00:06:33.338 "num_base_bdevs_operational": 2, 00:06:33.338 "base_bdevs_list": [ 00:06:33.338 { 00:06:33.338 "name": "BaseBdev1", 00:06:33.338 "uuid": "4cb29adb-26a2-44bf-ab2e-ee94296949dc", 00:06:33.338 "is_configured": true, 00:06:33.338 "data_offset": 0, 00:06:33.339 "data_size": 65536 00:06:33.339 }, 00:06:33.339 { 00:06:33.339 "name": "BaseBdev2", 00:06:33.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:33.339 "is_configured": false, 00:06:33.339 "data_offset": 0, 00:06:33.339 "data_size": 0 00:06:33.339 } 00:06:33.339 ] 00:06:33.339 }' 00:06:33.339 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:33.339 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.598 [2024-11-02 23:46:27.527895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:33.598 [2024-11-02 23:46:27.527952] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.598 [2024-11-02 23:46:27.539928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:33.598 [2024-11-02 23:46:27.541824] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:33.598 [2024-11-02 23:46:27.541905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:33.598 "name": "Existed_Raid", 00:06:33.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:33.598 "strip_size_kb": 64, 00:06:33.598 "state": "configuring", 00:06:33.598 "raid_level": "raid0", 00:06:33.598 "superblock": false, 00:06:33.598 "num_base_bdevs": 2, 00:06:33.598 "num_base_bdevs_discovered": 1, 00:06:33.598 "num_base_bdevs_operational": 2, 00:06:33.598 "base_bdevs_list": [ 00:06:33.598 { 00:06:33.598 "name": "BaseBdev1", 00:06:33.598 "uuid": "4cb29adb-26a2-44bf-ab2e-ee94296949dc", 00:06:33.598 "is_configured": true, 00:06:33.598 "data_offset": 0, 00:06:33.598 "data_size": 65536 00:06:33.598 }, 00:06:33.598 { 00:06:33.598 "name": "BaseBdev2", 00:06:33.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:33.598 "is_configured": false, 00:06:33.598 "data_offset": 0, 00:06:33.598 "data_size": 0 00:06:33.598 } 00:06:33.598 
] 00:06:33.598 }' 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:33.598 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.167 23:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:34.167 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.167 23:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.167 [2024-11-02 23:46:28.013941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:34.167 [2024-11-02 23:46:28.014076] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:34.167 [2024-11-02 23:46:28.014123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:34.167 [2024-11-02 23:46:28.014449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:34.167 [2024-11-02 23:46:28.014638] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:34.167 [2024-11-02 23:46:28.014690] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:06:34.168 [2024-11-02 23:46:28.014960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:34.168 BaseBdev2 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:34.168 23:46:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.168 [ 00:06:34.168 { 00:06:34.168 "name": "BaseBdev2", 00:06:34.168 "aliases": [ 00:06:34.168 "ecef1857-e9a9-4ad3-a31b-097a93557d66" 00:06:34.168 ], 00:06:34.168 "product_name": "Malloc disk", 00:06:34.168 "block_size": 512, 00:06:34.168 "num_blocks": 65536, 00:06:34.168 "uuid": "ecef1857-e9a9-4ad3-a31b-097a93557d66", 00:06:34.168 "assigned_rate_limits": { 00:06:34.168 "rw_ios_per_sec": 0, 00:06:34.168 "rw_mbytes_per_sec": 0, 00:06:34.168 "r_mbytes_per_sec": 0, 00:06:34.168 "w_mbytes_per_sec": 0 00:06:34.168 }, 00:06:34.168 "claimed": true, 00:06:34.168 "claim_type": "exclusive_write", 00:06:34.168 "zoned": false, 00:06:34.168 "supported_io_types": { 00:06:34.168 "read": true, 00:06:34.168 "write": true, 00:06:34.168 "unmap": true, 00:06:34.168 "flush": true, 00:06:34.168 "reset": true, 00:06:34.168 "nvme_admin": false, 00:06:34.168 "nvme_io": false, 00:06:34.168 "nvme_io_md": 
false, 00:06:34.168 "write_zeroes": true, 00:06:34.168 "zcopy": true, 00:06:34.168 "get_zone_info": false, 00:06:34.168 "zone_management": false, 00:06:34.168 "zone_append": false, 00:06:34.168 "compare": false, 00:06:34.168 "compare_and_write": false, 00:06:34.168 "abort": true, 00:06:34.168 "seek_hole": false, 00:06:34.168 "seek_data": false, 00:06:34.168 "copy": true, 00:06:34.168 "nvme_iov_md": false 00:06:34.168 }, 00:06:34.168 "memory_domains": [ 00:06:34.168 { 00:06:34.168 "dma_device_id": "system", 00:06:34.168 "dma_device_type": 1 00:06:34.168 }, 00:06:34.168 { 00:06:34.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.168 "dma_device_type": 2 00:06:34.168 } 00:06:34.168 ], 00:06:34.168 "driver_specific": {} 00:06:34.168 } 00:06:34.168 ] 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:34.168 "name": "Existed_Raid", 00:06:34.168 "uuid": "f2e6bc10-15ce-4f4a-9607-c4dba5e0a714", 00:06:34.168 "strip_size_kb": 64, 00:06:34.168 "state": "online", 00:06:34.168 "raid_level": "raid0", 00:06:34.168 "superblock": false, 00:06:34.168 "num_base_bdevs": 2, 00:06:34.168 "num_base_bdevs_discovered": 2, 00:06:34.168 "num_base_bdevs_operational": 2, 00:06:34.168 "base_bdevs_list": [ 00:06:34.168 { 00:06:34.168 "name": "BaseBdev1", 00:06:34.168 "uuid": "4cb29adb-26a2-44bf-ab2e-ee94296949dc", 00:06:34.168 "is_configured": true, 00:06:34.168 "data_offset": 0, 00:06:34.168 "data_size": 65536 00:06:34.168 }, 00:06:34.168 { 00:06:34.168 "name": "BaseBdev2", 00:06:34.168 "uuid": "ecef1857-e9a9-4ad3-a31b-097a93557d66", 00:06:34.168 "is_configured": true, 00:06:34.168 "data_offset": 0, 00:06:34.168 "data_size": 65536 00:06:34.168 } 00:06:34.168 ] 00:06:34.168 }' 00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:06:34.168 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.427 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:34.427 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:34.427 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:34.427 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:34.427 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:34.427 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:34.427 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:34.427 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.427 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.427 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:34.427 [2024-11-02 23:46:28.465509] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:34.427 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.427 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:34.427 "name": "Existed_Raid", 00:06:34.427 "aliases": [ 00:06:34.427 "f2e6bc10-15ce-4f4a-9607-c4dba5e0a714" 00:06:34.427 ], 00:06:34.427 "product_name": "Raid Volume", 00:06:34.427 "block_size": 512, 00:06:34.427 "num_blocks": 131072, 00:06:34.427 "uuid": "f2e6bc10-15ce-4f4a-9607-c4dba5e0a714", 00:06:34.427 "assigned_rate_limits": { 00:06:34.427 "rw_ios_per_sec": 0, 00:06:34.427 "rw_mbytes_per_sec": 0, 00:06:34.427 "r_mbytes_per_sec": 
0, 00:06:34.427 "w_mbytes_per_sec": 0 00:06:34.427 }, 00:06:34.427 "claimed": false, 00:06:34.427 "zoned": false, 00:06:34.427 "supported_io_types": { 00:06:34.427 "read": true, 00:06:34.428 "write": true, 00:06:34.428 "unmap": true, 00:06:34.428 "flush": true, 00:06:34.428 "reset": true, 00:06:34.428 "nvme_admin": false, 00:06:34.428 "nvme_io": false, 00:06:34.428 "nvme_io_md": false, 00:06:34.428 "write_zeroes": true, 00:06:34.428 "zcopy": false, 00:06:34.428 "get_zone_info": false, 00:06:34.428 "zone_management": false, 00:06:34.428 "zone_append": false, 00:06:34.428 "compare": false, 00:06:34.428 "compare_and_write": false, 00:06:34.428 "abort": false, 00:06:34.428 "seek_hole": false, 00:06:34.428 "seek_data": false, 00:06:34.428 "copy": false, 00:06:34.428 "nvme_iov_md": false 00:06:34.428 }, 00:06:34.428 "memory_domains": [ 00:06:34.428 { 00:06:34.428 "dma_device_id": "system", 00:06:34.428 "dma_device_type": 1 00:06:34.428 }, 00:06:34.428 { 00:06:34.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.428 "dma_device_type": 2 00:06:34.428 }, 00:06:34.428 { 00:06:34.428 "dma_device_id": "system", 00:06:34.428 "dma_device_type": 1 00:06:34.428 }, 00:06:34.428 { 00:06:34.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.428 "dma_device_type": 2 00:06:34.428 } 00:06:34.428 ], 00:06:34.428 "driver_specific": { 00:06:34.428 "raid": { 00:06:34.428 "uuid": "f2e6bc10-15ce-4f4a-9607-c4dba5e0a714", 00:06:34.428 "strip_size_kb": 64, 00:06:34.428 "state": "online", 00:06:34.428 "raid_level": "raid0", 00:06:34.428 "superblock": false, 00:06:34.428 "num_base_bdevs": 2, 00:06:34.428 "num_base_bdevs_discovered": 2, 00:06:34.428 "num_base_bdevs_operational": 2, 00:06:34.428 "base_bdevs_list": [ 00:06:34.428 { 00:06:34.428 "name": "BaseBdev1", 00:06:34.428 "uuid": "4cb29adb-26a2-44bf-ab2e-ee94296949dc", 00:06:34.428 "is_configured": true, 00:06:34.428 "data_offset": 0, 00:06:34.428 "data_size": 65536 00:06:34.428 }, 00:06:34.428 { 00:06:34.428 "name": "BaseBdev2", 
00:06:34.428 "uuid": "ecef1857-e9a9-4ad3-a31b-097a93557d66", 00:06:34.428 "is_configured": true, 00:06:34.428 "data_offset": 0, 00:06:34.428 "data_size": 65536 00:06:34.428 } 00:06:34.428 ] 00:06:34.428 } 00:06:34.428 } 00:06:34.428 }' 00:06:34.428 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:34.688 BaseBdev2' 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.688 [2024-11-02 23:46:28.700948] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:34.688 [2024-11-02 23:46:28.701030] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:34.688 [2024-11-02 23:46:28.701121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:34.688 "name": "Existed_Raid", 00:06:34.688 "uuid": "f2e6bc10-15ce-4f4a-9607-c4dba5e0a714", 00:06:34.688 "strip_size_kb": 64, 00:06:34.688 
"state": "offline", 00:06:34.688 "raid_level": "raid0", 00:06:34.688 "superblock": false, 00:06:34.688 "num_base_bdevs": 2, 00:06:34.688 "num_base_bdevs_discovered": 1, 00:06:34.688 "num_base_bdevs_operational": 1, 00:06:34.688 "base_bdevs_list": [ 00:06:34.688 { 00:06:34.688 "name": null, 00:06:34.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:34.688 "is_configured": false, 00:06:34.688 "data_offset": 0, 00:06:34.688 "data_size": 65536 00:06:34.688 }, 00:06:34.688 { 00:06:34.688 "name": "BaseBdev2", 00:06:34.688 "uuid": "ecef1857-e9a9-4ad3-a31b-097a93557d66", 00:06:34.688 "is_configured": true, 00:06:34.688 "data_offset": 0, 00:06:34.688 "data_size": 65536 00:06:34.688 } 00:06:34.688 ] 00:06:34.688 }' 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:34.688 23:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.267 23:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:35.267 23:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.268 [2024-11-02 23:46:29.211765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:35.268 [2024-11-02 23:46:29.211892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71920 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 71920 ']' 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 71920 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71920 00:06:35.268 killing process with pid 71920 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71920' 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 71920 00:06:35.268 [2024-11-02 23:46:29.305321] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:35.268 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 71920 00:06:35.268 [2024-11-02 23:46:29.306321] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:35.528 00:06:35.528 real 0m3.812s 00:06:35.528 user 0m6.022s 00:06:35.528 sys 0m0.766s 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:35.528 ************************************ 00:06:35.528 END TEST raid_state_function_test 00:06:35.528 ************************************ 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.528 23:46:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:35.528 23:46:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:06:35.528 23:46:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:35.528 23:46:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:35.528 ************************************ 00:06:35.528 START TEST raid_state_function_test_sb 00:06:35.528 ************************************ 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72156 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:35.528 Process raid pid: 72156 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72156' 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72156 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 72156 ']' 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:35.528 23:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:35.788 [2024-11-02 23:46:29.681687] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:06:35.788 [2024-11-02 23:46:29.681821] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.788 [2024-11-02 23:46:29.814682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.788 [2024-11-02 23:46:29.840554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.047 [2024-11-02 23:46:29.882478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.047 [2024-11-02 23:46:29.882515] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.616 [2024-11-02 23:46:30.515486] bdev.c:8271:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:06:36.616 [2024-11-02 23:46:30.515550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:36.616 [2024-11-02 23:46:30.515560] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:36.616 [2024-11-02 23:46:30.515571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:36.616 "name": "Existed_Raid", 00:06:36.616 "uuid": "0a655a91-8d51-4066-ab1f-ccd13c81346f", 00:06:36.616 "strip_size_kb": 64, 00:06:36.616 "state": "configuring", 00:06:36.616 "raid_level": "raid0", 00:06:36.616 "superblock": true, 00:06:36.616 "num_base_bdevs": 2, 00:06:36.616 "num_base_bdevs_discovered": 0, 00:06:36.616 "num_base_bdevs_operational": 2, 00:06:36.616 "base_bdevs_list": [ 00:06:36.616 { 00:06:36.616 "name": "BaseBdev1", 00:06:36.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:36.616 "is_configured": false, 00:06:36.616 "data_offset": 0, 00:06:36.616 "data_size": 0 00:06:36.616 }, 00:06:36.616 { 00:06:36.616 "name": "BaseBdev2", 00:06:36.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:36.616 "is_configured": false, 00:06:36.616 "data_offset": 0, 00:06:36.616 "data_size": 0 00:06:36.616 } 00:06:36.616 ] 00:06:36.616 }' 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:36.616 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.876 [2024-11-02 23:46:30.910730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:36.876 
[2024-11-02 23:46:30.910853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.876 [2024-11-02 23:46:30.922707] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:36.876 [2024-11-02 23:46:30.922798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:36.876 [2024-11-02 23:46:30.922829] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:36.876 [2024-11-02 23:46:30.922867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.876 [2024-11-02 23:46:30.943554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:36.876 BaseBdev1 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
waitforbdev BaseBdev1 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.876 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.876 [ 00:06:37.136 { 00:06:37.136 "name": "BaseBdev1", 00:06:37.136 "aliases": [ 00:06:37.136 "e91ee3ef-e04b-473a-bb51-0f8207760d88" 00:06:37.136 ], 00:06:37.136 "product_name": "Malloc disk", 00:06:37.136 "block_size": 512, 00:06:37.136 "num_blocks": 65536, 00:06:37.136 "uuid": "e91ee3ef-e04b-473a-bb51-0f8207760d88", 00:06:37.136 "assigned_rate_limits": { 00:06:37.136 "rw_ios_per_sec": 0, 00:06:37.136 "rw_mbytes_per_sec": 0, 00:06:37.136 "r_mbytes_per_sec": 0, 00:06:37.136 "w_mbytes_per_sec": 0 00:06:37.136 }, 00:06:37.136 "claimed": true, 00:06:37.136 "claim_type": 
"exclusive_write", 00:06:37.136 "zoned": false, 00:06:37.136 "supported_io_types": { 00:06:37.136 "read": true, 00:06:37.136 "write": true, 00:06:37.136 "unmap": true, 00:06:37.136 "flush": true, 00:06:37.136 "reset": true, 00:06:37.136 "nvme_admin": false, 00:06:37.136 "nvme_io": false, 00:06:37.136 "nvme_io_md": false, 00:06:37.136 "write_zeroes": true, 00:06:37.136 "zcopy": true, 00:06:37.136 "get_zone_info": false, 00:06:37.136 "zone_management": false, 00:06:37.136 "zone_append": false, 00:06:37.136 "compare": false, 00:06:37.136 "compare_and_write": false, 00:06:37.136 "abort": true, 00:06:37.136 "seek_hole": false, 00:06:37.136 "seek_data": false, 00:06:37.136 "copy": true, 00:06:37.136 "nvme_iov_md": false 00:06:37.136 }, 00:06:37.136 "memory_domains": [ 00:06:37.136 { 00:06:37.136 "dma_device_id": "system", 00:06:37.136 "dma_device_type": 1 00:06:37.136 }, 00:06:37.136 { 00:06:37.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.136 "dma_device_type": 2 00:06:37.136 } 00:06:37.136 ], 00:06:37.136 "driver_specific": {} 00:06:37.136 } 00:06:37.136 ] 00:06:37.136 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.136 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:06:37.136 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:37.136 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:37.136 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:37.136 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:37.136 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:37.136 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:06:37.136 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:37.136 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:37.136 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:37.136 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:37.136 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.136 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.136 23:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:37.136 23:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.136 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.136 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:37.136 "name": "Existed_Raid", 00:06:37.136 "uuid": "cd3bf509-c99d-40fd-a9b0-db160b85792d", 00:06:37.136 "strip_size_kb": 64, 00:06:37.136 "state": "configuring", 00:06:37.136 "raid_level": "raid0", 00:06:37.136 "superblock": true, 00:06:37.136 "num_base_bdevs": 2, 00:06:37.136 "num_base_bdevs_discovered": 1, 00:06:37.136 "num_base_bdevs_operational": 2, 00:06:37.136 "base_bdevs_list": [ 00:06:37.136 { 00:06:37.136 "name": "BaseBdev1", 00:06:37.136 "uuid": "e91ee3ef-e04b-473a-bb51-0f8207760d88", 00:06:37.136 "is_configured": true, 00:06:37.136 "data_offset": 2048, 00:06:37.136 "data_size": 63488 00:06:37.136 }, 00:06:37.136 { 00:06:37.136 "name": "BaseBdev2", 00:06:37.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.136 "is_configured": false, 00:06:37.136 "data_offset": 0, 00:06:37.136 
"data_size": 0 00:06:37.136 } 00:06:37.136 ] 00:06:37.136 }' 00:06:37.136 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:37.136 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.396 [2024-11-02 23:46:31.414833] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:37.396 [2024-11-02 23:46:31.414952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.396 [2024-11-02 23:46:31.426889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:37.396 [2024-11-02 23:46:31.428842] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:37.396 [2024-11-02 23:46:31.428941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 
00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:06:37.396 "name": "Existed_Raid", 00:06:37.396 "uuid": "5ad0993b-3a16-4e91-ba7f-1ffe3180630c", 00:06:37.396 "strip_size_kb": 64, 00:06:37.396 "state": "configuring", 00:06:37.396 "raid_level": "raid0", 00:06:37.396 "superblock": true, 00:06:37.396 "num_base_bdevs": 2, 00:06:37.396 "num_base_bdevs_discovered": 1, 00:06:37.396 "num_base_bdevs_operational": 2, 00:06:37.396 "base_bdevs_list": [ 00:06:37.396 { 00:06:37.396 "name": "BaseBdev1", 00:06:37.396 "uuid": "e91ee3ef-e04b-473a-bb51-0f8207760d88", 00:06:37.396 "is_configured": true, 00:06:37.396 "data_offset": 2048, 00:06:37.396 "data_size": 63488 00:06:37.396 }, 00:06:37.396 { 00:06:37.396 "name": "BaseBdev2", 00:06:37.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.396 "is_configured": false, 00:06:37.396 "data_offset": 0, 00:06:37.396 "data_size": 0 00:06:37.396 } 00:06:37.396 ] 00:06:37.396 }' 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:37.396 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.967 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:37.967 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.967 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.967 [2024-11-02 23:46:31.885147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:37.967 [2024-11-02 23:46:31.885360] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:37.967 [2024-11-02 23:46:31.885381] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:37.967 BaseBdev2 00:06:37.967 [2024-11-02 23:46:31.885638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:37.967 [2024-11-02 
23:46:31.885786] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:37.967 [2024-11-02 23:46:31.885801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:06:37.967 [2024-11-02 23:46:31.885917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:37.967 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.967 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:37.967 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:06:37.967 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:37.967 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:06:37.967 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:37.967 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:37.967 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:06:37.967 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.967 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.967 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.967 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:37.967 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:06:37.968 [ 00:06:37.968 { 00:06:37.968 "name": "BaseBdev2", 00:06:37.968 "aliases": [ 00:06:37.968 "7ae7bd22-e42d-4068-9e08-cce55b96d174" 00:06:37.968 ], 00:06:37.968 "product_name": "Malloc disk", 00:06:37.968 "block_size": 512, 00:06:37.968 "num_blocks": 65536, 00:06:37.968 "uuid": "7ae7bd22-e42d-4068-9e08-cce55b96d174", 00:06:37.968 "assigned_rate_limits": { 00:06:37.968 "rw_ios_per_sec": 0, 00:06:37.968 "rw_mbytes_per_sec": 0, 00:06:37.968 "r_mbytes_per_sec": 0, 00:06:37.968 "w_mbytes_per_sec": 0 00:06:37.968 }, 00:06:37.968 "claimed": true, 00:06:37.968 "claim_type": "exclusive_write", 00:06:37.968 "zoned": false, 00:06:37.968 "supported_io_types": { 00:06:37.968 "read": true, 00:06:37.968 "write": true, 00:06:37.968 "unmap": true, 00:06:37.968 "flush": true, 00:06:37.968 "reset": true, 00:06:37.968 "nvme_admin": false, 00:06:37.968 "nvme_io": false, 00:06:37.968 "nvme_io_md": false, 00:06:37.968 "write_zeroes": true, 00:06:37.968 "zcopy": true, 00:06:37.968 "get_zone_info": false, 00:06:37.968 "zone_management": false, 00:06:37.968 "zone_append": false, 00:06:37.968 "compare": false, 00:06:37.968 "compare_and_write": false, 00:06:37.968 "abort": true, 00:06:37.968 "seek_hole": false, 00:06:37.968 "seek_data": false, 00:06:37.968 "copy": true, 00:06:37.968 "nvme_iov_md": false 00:06:37.968 }, 00:06:37.968 "memory_domains": [ 00:06:37.968 { 00:06:37.968 "dma_device_id": "system", 00:06:37.968 "dma_device_type": 1 00:06:37.968 }, 00:06:37.968 { 00:06:37.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.968 "dma_device_type": 2 00:06:37.968 } 00:06:37.968 ], 00:06:37.968 "driver_specific": {} 00:06:37.968 } 00:06:37.968 ] 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:37.968 
23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:37.968 "name": 
"Existed_Raid", 00:06:37.968 "uuid": "5ad0993b-3a16-4e91-ba7f-1ffe3180630c", 00:06:37.968 "strip_size_kb": 64, 00:06:37.968 "state": "online", 00:06:37.968 "raid_level": "raid0", 00:06:37.968 "superblock": true, 00:06:37.968 "num_base_bdevs": 2, 00:06:37.968 "num_base_bdevs_discovered": 2, 00:06:37.968 "num_base_bdevs_operational": 2, 00:06:37.968 "base_bdevs_list": [ 00:06:37.968 { 00:06:37.968 "name": "BaseBdev1", 00:06:37.968 "uuid": "e91ee3ef-e04b-473a-bb51-0f8207760d88", 00:06:37.968 "is_configured": true, 00:06:37.968 "data_offset": 2048, 00:06:37.968 "data_size": 63488 00:06:37.968 }, 00:06:37.968 { 00:06:37.968 "name": "BaseBdev2", 00:06:37.968 "uuid": "7ae7bd22-e42d-4068-9e08-cce55b96d174", 00:06:37.968 "is_configured": true, 00:06:37.968 "data_offset": 2048, 00:06:37.968 "data_size": 63488 00:06:37.968 } 00:06:37.968 ] 00:06:37.968 }' 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:37.968 23:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.541 [2024-11-02 23:46:32.360693] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:38.541 "name": "Existed_Raid", 00:06:38.541 "aliases": [ 00:06:38.541 "5ad0993b-3a16-4e91-ba7f-1ffe3180630c" 00:06:38.541 ], 00:06:38.541 "product_name": "Raid Volume", 00:06:38.541 "block_size": 512, 00:06:38.541 "num_blocks": 126976, 00:06:38.541 "uuid": "5ad0993b-3a16-4e91-ba7f-1ffe3180630c", 00:06:38.541 "assigned_rate_limits": { 00:06:38.541 "rw_ios_per_sec": 0, 00:06:38.541 "rw_mbytes_per_sec": 0, 00:06:38.541 "r_mbytes_per_sec": 0, 00:06:38.541 "w_mbytes_per_sec": 0 00:06:38.541 }, 00:06:38.541 "claimed": false, 00:06:38.541 "zoned": false, 00:06:38.541 "supported_io_types": { 00:06:38.541 "read": true, 00:06:38.541 "write": true, 00:06:38.541 "unmap": true, 00:06:38.541 "flush": true, 00:06:38.541 "reset": true, 00:06:38.541 "nvme_admin": false, 00:06:38.541 "nvme_io": false, 00:06:38.541 "nvme_io_md": false, 00:06:38.541 "write_zeroes": true, 00:06:38.541 "zcopy": false, 00:06:38.541 "get_zone_info": false, 00:06:38.541 "zone_management": false, 00:06:38.541 "zone_append": false, 00:06:38.541 "compare": false, 00:06:38.541 "compare_and_write": false, 00:06:38.541 "abort": false, 00:06:38.541 "seek_hole": false, 00:06:38.541 "seek_data": false, 00:06:38.541 "copy": false, 00:06:38.541 "nvme_iov_md": false 00:06:38.541 }, 00:06:38.541 "memory_domains": [ 00:06:38.541 { 00:06:38.541 "dma_device_id": "system", 00:06:38.541 "dma_device_type": 1 00:06:38.541 }, 00:06:38.541 { 00:06:38.541 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:06:38.541 "dma_device_type": 2 00:06:38.541 }, 00:06:38.541 { 00:06:38.541 "dma_device_id": "system", 00:06:38.541 "dma_device_type": 1 00:06:38.541 }, 00:06:38.541 { 00:06:38.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.541 "dma_device_type": 2 00:06:38.541 } 00:06:38.541 ], 00:06:38.541 "driver_specific": { 00:06:38.541 "raid": { 00:06:38.541 "uuid": "5ad0993b-3a16-4e91-ba7f-1ffe3180630c", 00:06:38.541 "strip_size_kb": 64, 00:06:38.541 "state": "online", 00:06:38.541 "raid_level": "raid0", 00:06:38.541 "superblock": true, 00:06:38.541 "num_base_bdevs": 2, 00:06:38.541 "num_base_bdevs_discovered": 2, 00:06:38.541 "num_base_bdevs_operational": 2, 00:06:38.541 "base_bdevs_list": [ 00:06:38.541 { 00:06:38.541 "name": "BaseBdev1", 00:06:38.541 "uuid": "e91ee3ef-e04b-473a-bb51-0f8207760d88", 00:06:38.541 "is_configured": true, 00:06:38.541 "data_offset": 2048, 00:06:38.541 "data_size": 63488 00:06:38.541 }, 00:06:38.541 { 00:06:38.541 "name": "BaseBdev2", 00:06:38.541 "uuid": "7ae7bd22-e42d-4068-9e08-cce55b96d174", 00:06:38.541 "is_configured": true, 00:06:38.541 "data_offset": 2048, 00:06:38.541 "data_size": 63488 00:06:38.541 } 00:06:38.541 ] 00:06:38.541 } 00:06:38.541 } 00:06:38.541 }' 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:38.541 BaseBdev2' 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:38.541 23:46:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.541 [2024-11-02 23:46:32.612118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:38.541 [2024-11-02 23:46:32.612219] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:38.541 [2024-11-02 23:46:32.612313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:38.541 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:38.801 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.801 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:38.801 23:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.801 23:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.801 23:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.801 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:38.801 "name": "Existed_Raid", 00:06:38.801 "uuid": "5ad0993b-3a16-4e91-ba7f-1ffe3180630c", 00:06:38.801 "strip_size_kb": 64, 00:06:38.801 "state": "offline", 00:06:38.801 "raid_level": "raid0", 00:06:38.801 "superblock": true, 00:06:38.801 "num_base_bdevs": 2, 00:06:38.801 "num_base_bdevs_discovered": 1, 00:06:38.801 "num_base_bdevs_operational": 1, 00:06:38.801 "base_bdevs_list": [ 00:06:38.801 { 00:06:38.801 "name": null, 00:06:38.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:38.802 "is_configured": false, 00:06:38.802 "data_offset": 0, 00:06:38.802 "data_size": 63488 00:06:38.802 }, 00:06:38.802 { 00:06:38.802 "name": "BaseBdev2", 00:06:38.802 "uuid": "7ae7bd22-e42d-4068-9e08-cce55b96d174", 00:06:38.802 "is_configured": true, 00:06:38.802 "data_offset": 2048, 00:06:38.802 "data_size": 63488 00:06:38.802 } 00:06:38.802 ] 00:06:38.802 }' 00:06:38.802 23:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:06:38.802 23:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.061 23:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:39.061 23:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:39.061 23:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.061 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.061 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.061 23:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:39.061 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.061 23:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:39.061 23:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:39.061 23:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:39.061 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.061 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.061 [2024-11-02 23:46:33.130839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:39.061 [2024-11-02 23:46:33.130904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:39.061 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.061 23:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:39.061 23:46:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:39.061 23:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.061 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.061 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.061 23:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:39.321 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.321 23:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:39.321 23:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:39.321 23:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:39.321 23:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72156 00:06:39.321 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 72156 ']' 00:06:39.321 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 72156 00:06:39.321 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:06:39.321 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:39.321 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72156 00:06:39.321 killing process with pid 72156 00:06:39.321 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:39.321 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:39.321 23:46:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72156' 00:06:39.321 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 72156 00:06:39.321 [2024-11-02 23:46:33.242466] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:39.321 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 72156 00:06:39.321 [2024-11-02 23:46:33.243543] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:39.581 23:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:06:39.581 00:06:39.581 real 0m3.870s 00:06:39.581 user 0m6.132s 00:06:39.581 sys 0m0.772s 00:06:39.581 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.581 ************************************ 00:06:39.581 END TEST raid_state_function_test_sb 00:06:39.581 ************************************ 00:06:39.581 23:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.581 23:46:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:06:39.581 23:46:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:39.581 23:46:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.581 23:46:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:39.581 ************************************ 00:06:39.581 START TEST raid_superblock_test 00:06:39.581 ************************************ 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 2 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:06:39.581 23:46:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72392 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72392 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 72392 ']' 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 
-- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:39.581 23:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.581 [2024-11-02 23:46:33.617715] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:06:39.581 [2024-11-02 23:46:33.617858] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72392 ] 00:06:39.841 [2024-11-02 23:46:33.772002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.841 [2024-11-02 23:46:33.801386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.841 [2024-11-02 23:46:33.844141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:39.841 [2024-11-02 23:46:33.844177] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- 
# local bdev_malloc=malloc1 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.417 malloc1 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.417 [2024-11-02 23:46:34.479027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:40.417 [2024-11-02 23:46:34.479170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:40.417 [2024-11-02 23:46:34.479210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:40.417 [2024-11-02 23:46:34.479244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:40.417 [2024-11-02 23:46:34.481401] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:40.417 [2024-11-02 23:46:34.481485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:40.417 pt1 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.417 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.418 malloc2 00:06:40.418 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.418 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:40.418 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.418 23:46:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:06:40.681 [2024-11-02 23:46:34.511903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:40.682 [2024-11-02 23:46:34.511973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:40.682 [2024-11-02 23:46:34.511992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:40.682 [2024-11-02 23:46:34.512003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:40.682 [2024-11-02 23:46:34.514201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:40.682 [2024-11-02 23:46:34.514242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:40.682 pt2 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.682 [2024-11-02 23:46:34.523953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:40.682 [2024-11-02 23:46:34.525890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:40.682 [2024-11-02 23:46:34.526051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:40.682 [2024-11-02 23:46:34.526066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:40.682 [2024-11-02 23:46:34.526363] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:40.682 [2024-11-02 23:46:34.526523] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:40.682 [2024-11-02 23:46:34.526534] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:06:40.682 [2024-11-02 23:46:34.526693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:40.682 "name": "raid_bdev1", 00:06:40.682 "uuid": "04f942af-3813-4e76-a55e-c08c440e2d39", 00:06:40.682 "strip_size_kb": 64, 00:06:40.682 "state": "online", 00:06:40.682 "raid_level": "raid0", 00:06:40.682 "superblock": true, 00:06:40.682 "num_base_bdevs": 2, 00:06:40.682 "num_base_bdevs_discovered": 2, 00:06:40.682 "num_base_bdevs_operational": 2, 00:06:40.682 "base_bdevs_list": [ 00:06:40.682 { 00:06:40.682 "name": "pt1", 00:06:40.682 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:40.682 "is_configured": true, 00:06:40.682 "data_offset": 2048, 00:06:40.682 "data_size": 63488 00:06:40.682 }, 00:06:40.682 { 00:06:40.682 "name": "pt2", 00:06:40.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:40.682 "is_configured": true, 00:06:40.682 "data_offset": 2048, 00:06:40.682 "data_size": 63488 00:06:40.682 } 00:06:40.682 ] 00:06:40.682 }' 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:40.682 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.942 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:40.942 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:40.942 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:40.942 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:40.942 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:40.942 23:46:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:40.942 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:40.942 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.942 23:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.942 23:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:40.942 [2024-11-02 23:46:35.003352] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:40.942 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.201 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:41.201 "name": "raid_bdev1", 00:06:41.201 "aliases": [ 00:06:41.201 "04f942af-3813-4e76-a55e-c08c440e2d39" 00:06:41.201 ], 00:06:41.201 "product_name": "Raid Volume", 00:06:41.201 "block_size": 512, 00:06:41.201 "num_blocks": 126976, 00:06:41.201 "uuid": "04f942af-3813-4e76-a55e-c08c440e2d39", 00:06:41.201 "assigned_rate_limits": { 00:06:41.201 "rw_ios_per_sec": 0, 00:06:41.201 "rw_mbytes_per_sec": 0, 00:06:41.201 "r_mbytes_per_sec": 0, 00:06:41.201 "w_mbytes_per_sec": 0 00:06:41.201 }, 00:06:41.201 "claimed": false, 00:06:41.201 "zoned": false, 00:06:41.201 "supported_io_types": { 00:06:41.201 "read": true, 00:06:41.201 "write": true, 00:06:41.201 "unmap": true, 00:06:41.201 "flush": true, 00:06:41.201 "reset": true, 00:06:41.201 "nvme_admin": false, 00:06:41.201 "nvme_io": false, 00:06:41.201 "nvme_io_md": false, 00:06:41.201 "write_zeroes": true, 00:06:41.201 "zcopy": false, 00:06:41.201 "get_zone_info": false, 00:06:41.201 "zone_management": false, 00:06:41.201 "zone_append": false, 00:06:41.201 "compare": false, 00:06:41.201 "compare_and_write": false, 00:06:41.201 "abort": false, 00:06:41.201 "seek_hole": false, 00:06:41.201 "seek_data": false, 00:06:41.201 "copy": 
false, 00:06:41.201 "nvme_iov_md": false 00:06:41.201 }, 00:06:41.201 "memory_domains": [ 00:06:41.201 { 00:06:41.201 "dma_device_id": "system", 00:06:41.202 "dma_device_type": 1 00:06:41.202 }, 00:06:41.202 { 00:06:41.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.202 "dma_device_type": 2 00:06:41.202 }, 00:06:41.202 { 00:06:41.202 "dma_device_id": "system", 00:06:41.202 "dma_device_type": 1 00:06:41.202 }, 00:06:41.202 { 00:06:41.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.202 "dma_device_type": 2 00:06:41.202 } 00:06:41.202 ], 00:06:41.202 "driver_specific": { 00:06:41.202 "raid": { 00:06:41.202 "uuid": "04f942af-3813-4e76-a55e-c08c440e2d39", 00:06:41.202 "strip_size_kb": 64, 00:06:41.202 "state": "online", 00:06:41.202 "raid_level": "raid0", 00:06:41.202 "superblock": true, 00:06:41.202 "num_base_bdevs": 2, 00:06:41.202 "num_base_bdevs_discovered": 2, 00:06:41.202 "num_base_bdevs_operational": 2, 00:06:41.202 "base_bdevs_list": [ 00:06:41.202 { 00:06:41.202 "name": "pt1", 00:06:41.202 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:41.202 "is_configured": true, 00:06:41.202 "data_offset": 2048, 00:06:41.202 "data_size": 63488 00:06:41.202 }, 00:06:41.202 { 00:06:41.202 "name": "pt2", 00:06:41.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:41.202 "is_configured": true, 00:06:41.202 "data_offset": 2048, 00:06:41.202 "data_size": 63488 00:06:41.202 } 00:06:41.202 ] 00:06:41.202 } 00:06:41.202 } 00:06:41.202 }' 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:41.202 pt2' 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='512 ' 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:41.202 23:46:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:06:41.202 [2024-11-02 23:46:35.254917] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:41.202 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=04f942af-3813-4e76-a55e-c08c440e2d39 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 04f942af-3813-4e76-a55e-c08c440e2d39 ']' 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.462 [2024-11-02 23:46:35.302559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:41.462 [2024-11-02 23:46:35.302660] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:41.462 [2024-11-02 23:46:35.302836] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:41.462 [2024-11-02 23:46:35.302935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:41.462 [2024-11-02 23:46:35.302987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 
00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.462 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.462 [2024-11-02 23:46:35.450343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:41.462 [2024-11-02 23:46:35.452307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:41.462 [2024-11-02 23:46:35.452380] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:06:41.462 [2024-11-02 23:46:35.452438] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:41.462 [2024-11-02 23:46:35.452458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:41.462 [2024-11-02 23:46:35.452467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:06:41.462 request: 00:06:41.462 { 00:06:41.462 "name": "raid_bdev1", 00:06:41.462 "raid_level": "raid0", 00:06:41.462 "base_bdevs": [ 00:06:41.462 "malloc1", 00:06:41.463 "malloc2" 00:06:41.463 ], 00:06:41.463 "strip_size_kb": 64, 00:06:41.463 "superblock": false, 00:06:41.463 "method": "bdev_raid_create", 00:06:41.463 "req_id": 1 00:06:41.463 } 00:06:41.463 Got JSON-RPC error response 00:06:41.463 response: 00:06:41.463 { 00:06:41.463 "code": -17, 00:06:41.463 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:41.463 } 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.463 23:46:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.463 [2024-11-02 23:46:35.518218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:41.463 [2024-11-02 23:46:35.518352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.463 [2024-11-02 23:46:35.518397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:41.463 [2024-11-02 23:46:35.518450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.463 [2024-11-02 23:46:35.520680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.463 [2024-11-02 23:46:35.520772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:41.463 [2024-11-02 23:46:35.520897] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:41.463 [2024-11-02 23:46:35.520972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:41.463 pt1 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.463 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.723 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:41.723 "name": "raid_bdev1", 00:06:41.723 "uuid": "04f942af-3813-4e76-a55e-c08c440e2d39", 00:06:41.723 "strip_size_kb": 64, 00:06:41.723 "state": "configuring", 00:06:41.723 "raid_level": "raid0", 00:06:41.723 "superblock": true, 00:06:41.723 "num_base_bdevs": 2, 00:06:41.723 "num_base_bdevs_discovered": 1, 00:06:41.723 "num_base_bdevs_operational": 2, 00:06:41.723 "base_bdevs_list": [ 00:06:41.723 { 00:06:41.723 "name": "pt1", 00:06:41.723 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:06:41.723 "is_configured": true, 00:06:41.723 "data_offset": 2048, 00:06:41.723 "data_size": 63488 00:06:41.723 }, 00:06:41.723 { 00:06:41.723 "name": null, 00:06:41.723 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:41.723 "is_configured": false, 00:06:41.723 "data_offset": 2048, 00:06:41.723 "data_size": 63488 00:06:41.723 } 00:06:41.723 ] 00:06:41.723 }' 00:06:41.723 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:41.723 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.982 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:06:41.982 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:41.982 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:41.982 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:41.982 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.982 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.982 [2024-11-02 23:46:35.981395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:41.983 [2024-11-02 23:46:35.981540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.983 [2024-11-02 23:46:35.981591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:41.983 [2024-11-02 23:46:35.981621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.983 [2024-11-02 23:46:35.982083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.983 [2024-11-02 23:46:35.982138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:06:41.983 [2024-11-02 23:46:35.982244] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:41.983 [2024-11-02 23:46:35.982292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:41.983 [2024-11-02 23:46:35.982421] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:41.983 [2024-11-02 23:46:35.982477] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:41.983 [2024-11-02 23:46:35.982757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:06:41.983 [2024-11-02 23:46:35.982912] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:41.983 [2024-11-02 23:46:35.982959] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:06:41.983 [2024-11-02 23:46:35.983102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:41.983 pt2 00:06:41.983 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.983 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:41.983 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:41.983 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:41.983 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:41.983 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:41.983 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:41.983 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:41.983 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:06:41.983 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:41.983 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:41.983 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:41.983 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:41.983 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:41.983 23:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:41.983 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.983 23:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.983 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.983 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:41.983 "name": "raid_bdev1", 00:06:41.983 "uuid": "04f942af-3813-4e76-a55e-c08c440e2d39", 00:06:41.983 "strip_size_kb": 64, 00:06:41.983 "state": "online", 00:06:41.983 "raid_level": "raid0", 00:06:41.983 "superblock": true, 00:06:41.983 "num_base_bdevs": 2, 00:06:41.983 "num_base_bdevs_discovered": 2, 00:06:41.983 "num_base_bdevs_operational": 2, 00:06:41.983 "base_bdevs_list": [ 00:06:41.983 { 00:06:41.983 "name": "pt1", 00:06:41.983 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:41.983 "is_configured": true, 00:06:41.983 "data_offset": 2048, 00:06:41.983 "data_size": 63488 00:06:41.983 }, 00:06:41.983 { 00:06:41.983 "name": "pt2", 00:06:41.983 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:41.983 "is_configured": true, 00:06:41.983 "data_offset": 2048, 00:06:41.983 "data_size": 63488 00:06:41.983 } 00:06:41.983 ] 00:06:41.983 }' 00:06:41.983 23:46:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:41.983 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.553 [2024-11-02 23:46:36.444922] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:42.553 "name": "raid_bdev1", 00:06:42.553 "aliases": [ 00:06:42.553 "04f942af-3813-4e76-a55e-c08c440e2d39" 00:06:42.553 ], 00:06:42.553 "product_name": "Raid Volume", 00:06:42.553 "block_size": 512, 00:06:42.553 "num_blocks": 126976, 00:06:42.553 "uuid": "04f942af-3813-4e76-a55e-c08c440e2d39", 00:06:42.553 "assigned_rate_limits": { 00:06:42.553 "rw_ios_per_sec": 0, 00:06:42.553 "rw_mbytes_per_sec": 0, 00:06:42.553 
"r_mbytes_per_sec": 0, 00:06:42.553 "w_mbytes_per_sec": 0 00:06:42.553 }, 00:06:42.553 "claimed": false, 00:06:42.553 "zoned": false, 00:06:42.553 "supported_io_types": { 00:06:42.553 "read": true, 00:06:42.553 "write": true, 00:06:42.553 "unmap": true, 00:06:42.553 "flush": true, 00:06:42.553 "reset": true, 00:06:42.553 "nvme_admin": false, 00:06:42.553 "nvme_io": false, 00:06:42.553 "nvme_io_md": false, 00:06:42.553 "write_zeroes": true, 00:06:42.553 "zcopy": false, 00:06:42.553 "get_zone_info": false, 00:06:42.553 "zone_management": false, 00:06:42.553 "zone_append": false, 00:06:42.553 "compare": false, 00:06:42.553 "compare_and_write": false, 00:06:42.553 "abort": false, 00:06:42.553 "seek_hole": false, 00:06:42.553 "seek_data": false, 00:06:42.553 "copy": false, 00:06:42.553 "nvme_iov_md": false 00:06:42.553 }, 00:06:42.553 "memory_domains": [ 00:06:42.553 { 00:06:42.553 "dma_device_id": "system", 00:06:42.553 "dma_device_type": 1 00:06:42.553 }, 00:06:42.553 { 00:06:42.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:42.553 "dma_device_type": 2 00:06:42.553 }, 00:06:42.553 { 00:06:42.553 "dma_device_id": "system", 00:06:42.553 "dma_device_type": 1 00:06:42.553 }, 00:06:42.553 { 00:06:42.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:42.553 "dma_device_type": 2 00:06:42.553 } 00:06:42.553 ], 00:06:42.553 "driver_specific": { 00:06:42.553 "raid": { 00:06:42.553 "uuid": "04f942af-3813-4e76-a55e-c08c440e2d39", 00:06:42.553 "strip_size_kb": 64, 00:06:42.553 "state": "online", 00:06:42.553 "raid_level": "raid0", 00:06:42.553 "superblock": true, 00:06:42.553 "num_base_bdevs": 2, 00:06:42.553 "num_base_bdevs_discovered": 2, 00:06:42.553 "num_base_bdevs_operational": 2, 00:06:42.553 "base_bdevs_list": [ 00:06:42.553 { 00:06:42.553 "name": "pt1", 00:06:42.553 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:42.553 "is_configured": true, 00:06:42.553 "data_offset": 2048, 00:06:42.553 "data_size": 63488 00:06:42.553 }, 00:06:42.553 { 00:06:42.553 "name": 
"pt2", 00:06:42.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:42.553 "is_configured": true, 00:06:42.553 "data_offset": 2048, 00:06:42.553 "data_size": 63488 00:06:42.553 } 00:06:42.553 ] 00:06:42.553 } 00:06:42.553 } 00:06:42.553 }' 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:42.553 pt2' 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:42.553 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:42.814 23:46:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.814 [2024-11-02 23:46:36.704485] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 04f942af-3813-4e76-a55e-c08c440e2d39 '!=' 04f942af-3813-4e76-a55e-c08c440e2d39 ']' 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72392 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 72392 ']' 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@956 -- # kill -0 72392 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72392 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72392' 00:06:42.814 killing process with pid 72392 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 72392 00:06:42.814 [2024-11-02 23:46:36.792984] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:42.814 [2024-11-02 23:46:36.793161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:42.814 23:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 72392 00:06:42.814 [2024-11-02 23:46:36.793246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:42.814 [2024-11-02 23:46:36.793263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:06:42.814 [2024-11-02 23:46:36.816492] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:43.074 23:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:06:43.074 00:06:43.074 real 0m3.488s 00:06:43.074 user 0m5.455s 00:06:43.074 sys 0m0.746s 00:06:43.074 23:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:43.074 23:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:06:43.074 ************************************ 00:06:43.074 END TEST raid_superblock_test 00:06:43.074 ************************************ 00:06:43.074 23:46:37 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:06:43.074 23:46:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:43.074 23:46:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:43.074 23:46:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:43.074 ************************************ 00:06:43.074 START TEST raid_read_error_test 00:06:43.074 ************************************ 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.PTST533fOS 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72598 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72598 00:06:43.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 72598 ']' 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:43.074 23:46:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.334 [2024-11-02 23:46:37.193426] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:06:43.334 [2024-11-02 23:46:37.193558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72598 ] 00:06:43.334 [2024-11-02 23:46:37.348690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.334 [2024-11-02 23:46:37.377892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.334 [2024-11-02 23:46:37.420212] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:43.334 [2024-11-02 23:46:37.420336] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.274 BaseBdev1_malloc 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.274 true 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.274 [2024-11-02 23:46:38.070956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:44.274 [2024-11-02 23:46:38.071025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.274 [2024-11-02 23:46:38.071053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:06:44.274 [2024-11-02 23:46:38.071068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.274 [2024-11-02 23:46:38.073310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.274 [2024-11-02 23:46:38.073349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:44.274 BaseBdev1 00:06:44.274 
23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.274 BaseBdev2_malloc 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.274 true 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.274 [2024-11-02 23:46:38.111760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:44.274 [2024-11-02 23:46:38.111894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.274 [2024-11-02 23:46:38.111920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:44.274 [2024-11-02 23:46:38.111938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.274 [2024-11-02 
23:46:38.114053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.274 [2024-11-02 23:46:38.114093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:44.274 BaseBdev2 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.274 [2024-11-02 23:46:38.123817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:44.274 [2024-11-02 23:46:38.125655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:44.274 [2024-11-02 23:46:38.125895] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:44.274 [2024-11-02 23:46:38.125921] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:44.274 [2024-11-02 23:46:38.126227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:44.274 [2024-11-02 23:46:38.126386] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:44.274 [2024-11-02 23:46:38.126407] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:06:44.274 [2024-11-02 23:46:38.126598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 
00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:44.274 "name": "raid_bdev1", 00:06:44.274 "uuid": "69da2ad5-2fd4-45b3-a9cf-6329cf75303c", 00:06:44.274 "strip_size_kb": 64, 00:06:44.274 "state": "online", 00:06:44.274 "raid_level": "raid0", 00:06:44.274 "superblock": true, 00:06:44.274 "num_base_bdevs": 2, 00:06:44.274 "num_base_bdevs_discovered": 2, 00:06:44.274 "num_base_bdevs_operational": 2, 00:06:44.274 
"base_bdevs_list": [ 00:06:44.274 { 00:06:44.274 "name": "BaseBdev1", 00:06:44.274 "uuid": "57829674-70c0-5b80-9cc9-c9bbe6bf297a", 00:06:44.274 "is_configured": true, 00:06:44.274 "data_offset": 2048, 00:06:44.274 "data_size": 63488 00:06:44.274 }, 00:06:44.274 { 00:06:44.274 "name": "BaseBdev2", 00:06:44.274 "uuid": "14544879-07ad-5815-bf55-387decfe95e5", 00:06:44.274 "is_configured": true, 00:06:44.274 "data_offset": 2048, 00:06:44.274 "data_size": 63488 00:06:44.274 } 00:06:44.274 ] 00:06:44.274 }' 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:44.274 23:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.534 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:44.534 23:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:44.804 [2024-11-02 23:46:38.639301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 
00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:45.747 "name": "raid_bdev1", 00:06:45.747 "uuid": "69da2ad5-2fd4-45b3-a9cf-6329cf75303c", 00:06:45.747 "strip_size_kb": 64, 00:06:45.747 "state": "online", 00:06:45.747 "raid_level": "raid0", 00:06:45.747 "superblock": true, 00:06:45.747 "num_base_bdevs": 2, 00:06:45.747 "num_base_bdevs_discovered": 2, 00:06:45.747 "num_base_bdevs_operational": 2, 00:06:45.747 
"base_bdevs_list": [ 00:06:45.747 { 00:06:45.747 "name": "BaseBdev1", 00:06:45.747 "uuid": "57829674-70c0-5b80-9cc9-c9bbe6bf297a", 00:06:45.747 "is_configured": true, 00:06:45.747 "data_offset": 2048, 00:06:45.747 "data_size": 63488 00:06:45.747 }, 00:06:45.747 { 00:06:45.747 "name": "BaseBdev2", 00:06:45.747 "uuid": "14544879-07ad-5815-bf55-387decfe95e5", 00:06:45.747 "is_configured": true, 00:06:45.747 "data_offset": 2048, 00:06:45.747 "data_size": 63488 00:06:45.747 } 00:06:45.747 ] 00:06:45.747 }' 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:45.747 23:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.007 23:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:46.007 23:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.007 23:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.007 [2024-11-02 23:46:40.003353] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:46.007 [2024-11-02 23:46:40.003389] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:46.007 [2024-11-02 23:46:40.005865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:46.007 [2024-11-02 23:46:40.005916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:46.007 [2024-11-02 23:46:40.005953] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:46.007 [2024-11-02 23:46:40.005962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:06:46.007 { 00:06:46.007 "results": [ 00:06:46.007 { 00:06:46.007 "job": "raid_bdev1", 00:06:46.007 "core_mask": "0x1", 00:06:46.007 "workload": "randrw", 00:06:46.007 "percentage": 50, 
00:06:46.007 "status": "finished", 00:06:46.007 "queue_depth": 1, 00:06:46.007 "io_size": 131072, 00:06:46.007 "runtime": 1.36478, 00:06:46.007 "iops": 16571.168979615763, 00:06:46.007 "mibps": 2071.3961224519703, 00:06:46.007 "io_failed": 1, 00:06:46.007 "io_timeout": 0, 00:06:46.007 "avg_latency_us": 83.83754060641095, 00:06:46.007 "min_latency_us": 24.705676855895195, 00:06:46.007 "max_latency_us": 1423.7624454148472 00:06:46.007 } 00:06:46.007 ], 00:06:46.007 "core_count": 1 00:06:46.007 } 00:06:46.007 23:46:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.007 23:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72598 00:06:46.007 23:46:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 72598 ']' 00:06:46.007 23:46:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 72598 00:06:46.007 23:46:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:06:46.007 23:46:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:46.007 23:46:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72598 00:06:46.007 23:46:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:46.007 23:46:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:46.007 killing process with pid 72598 00:06:46.007 23:46:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72598' 00:06:46.007 23:46:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 72598 00:06:46.007 23:46:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 72598 00:06:46.007 [2024-11-02 23:46:40.050949] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:46.007 [2024-11-02 
23:46:40.066959] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:46.267 23:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:46.267 23:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.PTST533fOS 00:06:46.267 23:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:46.267 23:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:06:46.267 23:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:46.267 ************************************ 00:06:46.267 END TEST raid_read_error_test 00:06:46.267 ************************************ 00:06:46.267 23:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:46.267 23:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:46.267 23:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:06:46.267 00:06:46.267 real 0m3.177s 00:06:46.267 user 0m4.065s 00:06:46.267 sys 0m0.501s 00:06:46.267 23:46:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:46.267 23:46:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.267 23:46:40 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:06:46.267 23:46:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:46.267 23:46:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:46.267 23:46:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:46.267 ************************************ 00:06:46.267 START TEST raid_write_error_test 00:06:46.267 ************************************ 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:06:46.267 23:46:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:46.267 23:46:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MBtpB4dDx9 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72727 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72727 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 72727 ']' 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:46.267 23:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.527 23:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:46.527 23:46:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.527 [2024-11-02 23:46:40.439082] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:06:46.527 [2024-11-02 23:46:40.439294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72727 ] 00:06:46.527 [2024-11-02 23:46:40.593677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.786 [2024-11-02 23:46:40.623078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.786 [2024-11-02 23:46:40.666111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.786 [2024-11-02 23:46:40.666242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.356 BaseBdev1_malloc 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.356 true 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.356 [2024-11-02 23:46:41.325063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:47.356 [2024-11-02 23:46:41.325151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.356 [2024-11-02 23:46:41.325182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:06:47.356 [2024-11-02 23:46:41.325192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.356 [2024-11-02 23:46:41.327236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.356 [2024-11-02 23:46:41.327273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:47.356 BaseBdev1 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.356 BaseBdev2_malloc 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:47.356 23:46:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.356 true 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.356 [2024-11-02 23:46:41.365451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:47.356 [2024-11-02 23:46:41.365520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.356 [2024-11-02 23:46:41.365540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:47.356 [2024-11-02 23:46:41.365557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.356 [2024-11-02 23:46:41.367662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.356 [2024-11-02 23:46:41.367712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:47.356 BaseBdev2 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.356 [2024-11-02 23:46:41.377507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:06:47.356 [2024-11-02 23:46:41.379375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:47.356 [2024-11-02 23:46:41.379638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:47.356 [2024-11-02 23:46:41.379656] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:47.356 [2024-11-02 23:46:41.379976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:47.356 [2024-11-02 23:46:41.380119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:47.356 [2024-11-02 23:46:41.380137] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:06:47.356 [2024-11-02 23:46:41.380300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:47.356 "name": "raid_bdev1", 00:06:47.356 "uuid": "9f24c3e8-946d-4209-80e8-d89f48675187", 00:06:47.356 "strip_size_kb": 64, 00:06:47.356 "state": "online", 00:06:47.356 "raid_level": "raid0", 00:06:47.356 "superblock": true, 00:06:47.356 "num_base_bdevs": 2, 00:06:47.356 "num_base_bdevs_discovered": 2, 00:06:47.356 "num_base_bdevs_operational": 2, 00:06:47.356 "base_bdevs_list": [ 00:06:47.356 { 00:06:47.356 "name": "BaseBdev1", 00:06:47.356 "uuid": "b125ee4b-e04a-5990-84a9-09b398709806", 00:06:47.356 "is_configured": true, 00:06:47.356 "data_offset": 2048, 00:06:47.356 "data_size": 63488 00:06:47.356 }, 00:06:47.356 { 00:06:47.356 "name": "BaseBdev2", 00:06:47.356 "uuid": "f92cc312-7720-56e7-a986-00afad3fe4fc", 00:06:47.356 "is_configured": true, 00:06:47.356 "data_offset": 2048, 00:06:47.356 "data_size": 63488 00:06:47.356 } 00:06:47.356 ] 00:06:47.356 }' 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:47.356 23:46:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.926 23:46:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:47.926 23:46:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:47.926 [2024-11-02 23:46:41.928908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:48.864 23:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:06:48.864 23:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.864 23:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.864 23:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.864 23:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:48.864 23:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:48.864 23:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:48.864 23:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:48.864 23:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:48.864 23:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:48.864 23:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:48.864 23:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:48.864 23:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:48.864 23:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:48.864 23:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:48.864 23:46:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:48.864 23:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:48.864 23:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.865 23:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:48.865 23:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.865 23:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.865 23:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.865 23:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:48.865 "name": "raid_bdev1", 00:06:48.865 "uuid": "9f24c3e8-946d-4209-80e8-d89f48675187", 00:06:48.865 "strip_size_kb": 64, 00:06:48.865 "state": "online", 00:06:48.865 "raid_level": "raid0", 00:06:48.865 "superblock": true, 00:06:48.865 "num_base_bdevs": 2, 00:06:48.865 "num_base_bdevs_discovered": 2, 00:06:48.865 "num_base_bdevs_operational": 2, 00:06:48.865 "base_bdevs_list": [ 00:06:48.865 { 00:06:48.865 "name": "BaseBdev1", 00:06:48.865 "uuid": "b125ee4b-e04a-5990-84a9-09b398709806", 00:06:48.865 "is_configured": true, 00:06:48.865 "data_offset": 2048, 00:06:48.865 "data_size": 63488 00:06:48.865 }, 00:06:48.865 { 00:06:48.865 "name": "BaseBdev2", 00:06:48.865 "uuid": "f92cc312-7720-56e7-a986-00afad3fe4fc", 00:06:48.865 "is_configured": true, 00:06:48.865 "data_offset": 2048, 00:06:48.865 "data_size": 63488 00:06:48.865 } 00:06:48.865 ] 00:06:48.865 }' 00:06:48.865 23:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:48.865 23:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.433 23:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:06:49.433 23:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.433 23:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.433 [2024-11-02 23:46:43.328836] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:49.434 [2024-11-02 23:46:43.328923] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:49.434 [2024-11-02 23:46:43.331490] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:49.434 [2024-11-02 23:46:43.331603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:49.434 [2024-11-02 23:46:43.331660] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:49.434 [2024-11-02 23:46:43.331724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:06:49.434 { 00:06:49.434 "results": [ 00:06:49.434 { 00:06:49.434 "job": "raid_bdev1", 00:06:49.434 "core_mask": "0x1", 00:06:49.434 "workload": "randrw", 00:06:49.434 "percentage": 50, 00:06:49.434 "status": "finished", 00:06:49.434 "queue_depth": 1, 00:06:49.434 "io_size": 131072, 00:06:49.434 "runtime": 1.400854, 00:06:49.434 "iops": 17071.7291023904, 00:06:49.434 "mibps": 2133.9661377988, 00:06:49.434 "io_failed": 1, 00:06:49.434 "io_timeout": 0, 00:06:49.434 "avg_latency_us": 81.12575674248517, 00:06:49.434 "min_latency_us": 24.705676855895195, 00:06:49.434 "max_latency_us": 1366.5257641921398 00:06:49.434 } 00:06:49.434 ], 00:06:49.434 "core_count": 1 00:06:49.434 } 00:06:49.434 23:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.434 23:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72727 00:06:49.434 23:46:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 72727 ']' 00:06:49.434 23:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 72727 00:06:49.434 23:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:06:49.434 23:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:49.434 23:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72727 00:06:49.434 killing process with pid 72727 00:06:49.434 23:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:49.434 23:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:49.434 23:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72727' 00:06:49.434 23:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 72727 00:06:49.434 [2024-11-02 23:46:43.364681] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:49.434 23:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 72727 00:06:49.434 [2024-11-02 23:46:43.380472] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:49.697 23:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:49.697 23:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MBtpB4dDx9 00:06:49.697 23:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:49.697 23:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:06:49.697 23:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:49.697 23:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:49.697 23:46:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:06:49.697 ************************************ 00:06:49.697 END TEST raid_write_error_test 00:06:49.697 ************************************ 00:06:49.697 23:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:06:49.697 00:06:49.697 real 0m3.255s 00:06:49.697 user 0m4.217s 00:06:49.697 sys 0m0.485s 00:06:49.697 23:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:49.697 23:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.697 23:46:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:49.697 23:46:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:06:49.697 23:46:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:49.697 23:46:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:49.697 23:46:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:49.697 ************************************ 00:06:49.697 START TEST raid_state_function_test 00:06:49.697 ************************************ 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72854 00:06:49.697 23:46:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72854' 00:06:49.697 Process raid pid: 72854 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72854 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 72854 ']' 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:49.697 23:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.698 [2024-11-02 23:46:43.752966] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:06:49.698 [2024-11-02 23:46:43.753188] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.962 [2024-11-02 23:46:43.910484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.962 [2024-11-02 23:46:43.939491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.962 [2024-11-02 23:46:43.981300] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.962 [2024-11-02 23:46:43.981409] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.531 [2024-11-02 23:46:44.586477] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:50.531 [2024-11-02 23:46:44.586608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:50.531 [2024-11-02 23:46:44.586642] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:50.531 [2024-11-02 23:46:44.586666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.531 23:46:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.531 23:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:50.531 "name": "Existed_Raid", 00:06:50.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.531 "strip_size_kb": 64, 00:06:50.531 "state": "configuring", 00:06:50.531 
"raid_level": "concat", 00:06:50.531 "superblock": false, 00:06:50.531 "num_base_bdevs": 2, 00:06:50.531 "num_base_bdevs_discovered": 0, 00:06:50.531 "num_base_bdevs_operational": 2, 00:06:50.531 "base_bdevs_list": [ 00:06:50.531 { 00:06:50.531 "name": "BaseBdev1", 00:06:50.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.532 "is_configured": false, 00:06:50.532 "data_offset": 0, 00:06:50.532 "data_size": 0 00:06:50.532 }, 00:06:50.532 { 00:06:50.532 "name": "BaseBdev2", 00:06:50.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.532 "is_configured": false, 00:06:50.532 "data_offset": 0, 00:06:50.532 "data_size": 0 00:06:50.532 } 00:06:50.532 ] 00:06:50.532 }' 00:06:50.532 23:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:50.532 23:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.101 23:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:51.101 23:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.101 23:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.101 [2024-11-02 23:46:44.993733] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:51.102 [2024-11-02 23:46:44.993871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:51.102 23:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.102 23:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:51.102 23:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.102 23:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:51.102 [2024-11-02 23:46:45.001689] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:51.102 [2024-11-02 23:46:45.001733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:51.102 [2024-11-02 23:46:45.001752] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:51.102 [2024-11-02 23:46:45.001772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.102 [2024-11-02 23:46:45.018359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:51.102 BaseBdev1 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.102 [ 00:06:51.102 { 00:06:51.102 "name": "BaseBdev1", 00:06:51.102 "aliases": [ 00:06:51.102 "a4e0b1bb-be5b-422c-9515-f7b0c762ec75" 00:06:51.102 ], 00:06:51.102 "product_name": "Malloc disk", 00:06:51.102 "block_size": 512, 00:06:51.102 "num_blocks": 65536, 00:06:51.102 "uuid": "a4e0b1bb-be5b-422c-9515-f7b0c762ec75", 00:06:51.102 "assigned_rate_limits": { 00:06:51.102 "rw_ios_per_sec": 0, 00:06:51.102 "rw_mbytes_per_sec": 0, 00:06:51.102 "r_mbytes_per_sec": 0, 00:06:51.102 "w_mbytes_per_sec": 0 00:06:51.102 }, 00:06:51.102 "claimed": true, 00:06:51.102 "claim_type": "exclusive_write", 00:06:51.102 "zoned": false, 00:06:51.102 "supported_io_types": { 00:06:51.102 "read": true, 00:06:51.102 "write": true, 00:06:51.102 "unmap": true, 00:06:51.102 "flush": true, 00:06:51.102 "reset": true, 00:06:51.102 "nvme_admin": false, 00:06:51.102 "nvme_io": false, 00:06:51.102 "nvme_io_md": false, 00:06:51.102 "write_zeroes": true, 00:06:51.102 "zcopy": true, 00:06:51.102 "get_zone_info": false, 00:06:51.102 "zone_management": false, 00:06:51.102 "zone_append": false, 00:06:51.102 "compare": false, 00:06:51.102 "compare_and_write": false, 00:06:51.102 "abort": true, 00:06:51.102 "seek_hole": false, 00:06:51.102 "seek_data": false, 00:06:51.102 "copy": true, 00:06:51.102 "nvme_iov_md": 
false 00:06:51.102 }, 00:06:51.102 "memory_domains": [ 00:06:51.102 { 00:06:51.102 "dma_device_id": "system", 00:06:51.102 "dma_device_type": 1 00:06:51.102 }, 00:06:51.102 { 00:06:51.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.102 "dma_device_type": 2 00:06:51.102 } 00:06:51.102 ], 00:06:51.102 "driver_specific": {} 00:06:51.102 } 00:06:51.102 ] 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.102 23:46:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:51.102 "name": "Existed_Raid", 00:06:51.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.102 "strip_size_kb": 64, 00:06:51.102 "state": "configuring", 00:06:51.102 "raid_level": "concat", 00:06:51.102 "superblock": false, 00:06:51.102 "num_base_bdevs": 2, 00:06:51.102 "num_base_bdevs_discovered": 1, 00:06:51.102 "num_base_bdevs_operational": 2, 00:06:51.102 "base_bdevs_list": [ 00:06:51.102 { 00:06:51.102 "name": "BaseBdev1", 00:06:51.102 "uuid": "a4e0b1bb-be5b-422c-9515-f7b0c762ec75", 00:06:51.102 "is_configured": true, 00:06:51.102 "data_offset": 0, 00:06:51.102 "data_size": 65536 00:06:51.102 }, 00:06:51.102 { 00:06:51.102 "name": "BaseBdev2", 00:06:51.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.102 "is_configured": false, 00:06:51.102 "data_offset": 0, 00:06:51.102 "data_size": 0 00:06:51.102 } 00:06:51.102 ] 00:06:51.102 }' 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:51.102 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.670 [2024-11-02 23:46:45.477633] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:51.670 [2024-11-02 23:46:45.477768] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.670 [2024-11-02 23:46:45.489628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:51.670 [2024-11-02 23:46:45.491562] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:51.670 [2024-11-02 23:46:45.491669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.670 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:51.670 "name": "Existed_Raid", 00:06:51.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.670 "strip_size_kb": 64, 00:06:51.671 "state": "configuring", 00:06:51.671 "raid_level": "concat", 00:06:51.671 "superblock": false, 00:06:51.671 "num_base_bdevs": 2, 00:06:51.671 "num_base_bdevs_discovered": 1, 00:06:51.671 "num_base_bdevs_operational": 2, 00:06:51.671 "base_bdevs_list": [ 00:06:51.671 { 00:06:51.671 "name": "BaseBdev1", 00:06:51.671 "uuid": "a4e0b1bb-be5b-422c-9515-f7b0c762ec75", 00:06:51.671 "is_configured": true, 00:06:51.671 "data_offset": 0, 00:06:51.671 "data_size": 65536 00:06:51.671 }, 00:06:51.671 { 00:06:51.671 "name": "BaseBdev2", 00:06:51.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.671 "is_configured": false, 00:06:51.671 "data_offset": 0, 00:06:51.671 "data_size": 0 
00:06:51.671 } 00:06:51.671 ] 00:06:51.671 }' 00:06:51.671 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:51.671 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.930 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:51.930 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.930 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.930 BaseBdev2 00:06:51.930 [2024-11-02 23:46:45.959976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:51.930 [2024-11-02 23:46:45.960023] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:51.930 [2024-11-02 23:46:45.960038] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:51.931 [2024-11-02 23:46:45.960316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:51.931 [2024-11-02 23:46:45.960450] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:51.931 [2024-11-02 23:46:45.960464] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:06:51.931 [2024-11-02 23:46:45.960680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:51.931 23:46:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.931 [ 00:06:51.931 { 00:06:51.931 "name": "BaseBdev2", 00:06:51.931 "aliases": [ 00:06:51.931 "67724b72-6c51-455c-bb64-9b1a14dba3c0" 00:06:51.931 ], 00:06:51.931 "product_name": "Malloc disk", 00:06:51.931 "block_size": 512, 00:06:51.931 "num_blocks": 65536, 00:06:51.931 "uuid": "67724b72-6c51-455c-bb64-9b1a14dba3c0", 00:06:51.931 "assigned_rate_limits": { 00:06:51.931 "rw_ios_per_sec": 0, 00:06:51.931 "rw_mbytes_per_sec": 0, 00:06:51.931 "r_mbytes_per_sec": 0, 00:06:51.931 "w_mbytes_per_sec": 0 00:06:51.931 }, 00:06:51.931 "claimed": true, 00:06:51.931 "claim_type": "exclusive_write", 00:06:51.931 "zoned": false, 00:06:51.931 "supported_io_types": { 00:06:51.931 "read": true, 00:06:51.931 "write": true, 00:06:51.931 "unmap": true, 00:06:51.931 "flush": true, 00:06:51.931 "reset": true, 00:06:51.931 "nvme_admin": false, 00:06:51.931 "nvme_io": false, 00:06:51.931 "nvme_io_md": 
false, 00:06:51.931 "write_zeroes": true, 00:06:51.931 "zcopy": true, 00:06:51.931 "get_zone_info": false, 00:06:51.931 "zone_management": false, 00:06:51.931 "zone_append": false, 00:06:51.931 "compare": false, 00:06:51.931 "compare_and_write": false, 00:06:51.931 "abort": true, 00:06:51.931 "seek_hole": false, 00:06:51.931 "seek_data": false, 00:06:51.931 "copy": true, 00:06:51.931 "nvme_iov_md": false 00:06:51.931 }, 00:06:51.931 "memory_domains": [ 00:06:51.931 { 00:06:51.931 "dma_device_id": "system", 00:06:51.931 "dma_device_type": 1 00:06:51.931 }, 00:06:51.931 { 00:06:51.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.931 "dma_device_type": 2 00:06:51.931 } 00:06:51.931 ], 00:06:51.931 "driver_specific": {} 00:06:51.931 } 00:06:51.931 ] 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:51.931 23:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:51.931 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:51.931 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:51.931 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:51.931 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:51.931 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:51.931 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:51.931 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:51.931 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.931 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.931 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:51.931 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.931 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.191 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.191 "name": "Existed_Raid", 00:06:52.191 "uuid": "12b27144-e922-4449-b7c9-1bdd1520ffa0", 00:06:52.191 "strip_size_kb": 64, 00:06:52.191 "state": "online", 00:06:52.191 "raid_level": "concat", 00:06:52.191 "superblock": false, 00:06:52.191 "num_base_bdevs": 2, 00:06:52.191 "num_base_bdevs_discovered": 2, 00:06:52.191 "num_base_bdevs_operational": 2, 00:06:52.191 "base_bdevs_list": [ 00:06:52.191 { 00:06:52.191 "name": "BaseBdev1", 00:06:52.191 "uuid": "a4e0b1bb-be5b-422c-9515-f7b0c762ec75", 00:06:52.191 "is_configured": true, 00:06:52.191 "data_offset": 0, 00:06:52.191 "data_size": 65536 00:06:52.191 }, 00:06:52.191 { 00:06:52.191 "name": "BaseBdev2", 00:06:52.191 "uuid": "67724b72-6c51-455c-bb64-9b1a14dba3c0", 00:06:52.191 "is_configured": true, 00:06:52.191 "data_offset": 0, 00:06:52.191 "data_size": 65536 00:06:52.191 } 00:06:52.191 ] 00:06:52.191 }' 00:06:52.191 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
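Editor's note: `verify_raid_bdev_state` pipes `bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and compares the resulting fields against the expected values. The same check expressed in Python, with the JSON abbreviated to the fields the check uses (shape taken from the dump above; this is an illustrative equivalent, not the test's real code path):

```python
import json

# Abbreviated output of `bdev_raid_get_bdevs all`, as dumped in the log above.
rpc_output = """[{
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "online",
    "raid_level": "concat",
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2
}]"""

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size, num_operational):
    # jq's select(.name == ...) picks the matching raid volume
    info = next(b for b in bdevs if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

ok = verify_raid_bdev_state(json.loads(rpc_output), "Existed_Raid",
                            "online", "concat", 64, 2)
```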
00:06:52.191 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.450 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:52.450 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:52.450 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:52.450 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:52.450 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:52.450 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:52.450 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:52.450 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:52.450 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.450 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.450 [2024-11-02 23:46:46.443488] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.450 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.450 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:52.450 "name": "Existed_Raid", 00:06:52.450 "aliases": [ 00:06:52.450 "12b27144-e922-4449-b7c9-1bdd1520ffa0" 00:06:52.450 ], 00:06:52.450 "product_name": "Raid Volume", 00:06:52.450 "block_size": 512, 00:06:52.450 "num_blocks": 131072, 00:06:52.450 "uuid": "12b27144-e922-4449-b7c9-1bdd1520ffa0", 00:06:52.450 "assigned_rate_limits": { 00:06:52.450 "rw_ios_per_sec": 0, 00:06:52.450 "rw_mbytes_per_sec": 0, 00:06:52.450 "r_mbytes_per_sec": 
0, 00:06:52.450 "w_mbytes_per_sec": 0 00:06:52.450 }, 00:06:52.450 "claimed": false, 00:06:52.450 "zoned": false, 00:06:52.450 "supported_io_types": { 00:06:52.450 "read": true, 00:06:52.450 "write": true, 00:06:52.450 "unmap": true, 00:06:52.450 "flush": true, 00:06:52.450 "reset": true, 00:06:52.450 "nvme_admin": false, 00:06:52.450 "nvme_io": false, 00:06:52.450 "nvme_io_md": false, 00:06:52.450 "write_zeroes": true, 00:06:52.450 "zcopy": false, 00:06:52.450 "get_zone_info": false, 00:06:52.450 "zone_management": false, 00:06:52.450 "zone_append": false, 00:06:52.450 "compare": false, 00:06:52.450 "compare_and_write": false, 00:06:52.450 "abort": false, 00:06:52.451 "seek_hole": false, 00:06:52.451 "seek_data": false, 00:06:52.451 "copy": false, 00:06:52.451 "nvme_iov_md": false 00:06:52.451 }, 00:06:52.451 "memory_domains": [ 00:06:52.451 { 00:06:52.451 "dma_device_id": "system", 00:06:52.451 "dma_device_type": 1 00:06:52.451 }, 00:06:52.451 { 00:06:52.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.451 "dma_device_type": 2 00:06:52.451 }, 00:06:52.451 { 00:06:52.451 "dma_device_id": "system", 00:06:52.451 "dma_device_type": 1 00:06:52.451 }, 00:06:52.451 { 00:06:52.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.451 "dma_device_type": 2 00:06:52.451 } 00:06:52.451 ], 00:06:52.451 "driver_specific": { 00:06:52.451 "raid": { 00:06:52.451 "uuid": "12b27144-e922-4449-b7c9-1bdd1520ffa0", 00:06:52.451 "strip_size_kb": 64, 00:06:52.451 "state": "online", 00:06:52.451 "raid_level": "concat", 00:06:52.451 "superblock": false, 00:06:52.451 "num_base_bdevs": 2, 00:06:52.451 "num_base_bdevs_discovered": 2, 00:06:52.451 "num_base_bdevs_operational": 2, 00:06:52.451 "base_bdevs_list": [ 00:06:52.451 { 00:06:52.451 "name": "BaseBdev1", 00:06:52.451 "uuid": "a4e0b1bb-be5b-422c-9515-f7b0c762ec75", 00:06:52.451 "is_configured": true, 00:06:52.451 "data_offset": 0, 00:06:52.451 "data_size": 65536 00:06:52.451 }, 00:06:52.451 { 00:06:52.451 "name": "BaseBdev2", 
00:06:52.451 "uuid": "67724b72-6c51-455c-bb64-9b1a14dba3c0", 00:06:52.451 "is_configured": true, 00:06:52.451 "data_offset": 0, 00:06:52.451 "data_size": 65536 00:06:52.451 } 00:06:52.451 ] 00:06:52.451 } 00:06:52.451 } 00:06:52.451 }' 00:06:52.451 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:52.451 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:52.451 BaseBdev2' 00:06:52.451 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:52.710 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:52.710 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
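Editor's note: `verify_raid_bdev_properties` compares `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` between the raid volume and each configured base bdev. jq renders the absent/null metadata fields as empty strings, which is why the comparison value in the log is `512` followed by three blanks (`\5\1\2\ \ \ ` in the `[[ ]]` pattern). A Python rendition of that join, with field values taken from the descriptors above:

```python
# block_size, md_size, md_interleave, dif_type — the latter three are null here
raid_fields = [512, None, None, None]   # Existed_Raid
base_fields = [512, None, None, None]   # BaseBdev1 reports the same layout

def jq_join(values, sep=" "):
    """Mimic jq's join(" "): null (None) values become empty strings."""
    return sep.join("" if v is None else str(v) for v in values)

cmp_raid_bdev = jq_join(raid_fields)
cmp_base_bdev = jq_join(base_fields)
```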
00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.711 [2024-11-02 23:46:46.682861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:52.711 [2024-11-02 23:46:46.682934] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:52.711 [2024-11-02 23:46:46.683023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.711 "name": "Existed_Raid", 00:06:52.711 "uuid": "12b27144-e922-4449-b7c9-1bdd1520ffa0", 00:06:52.711 "strip_size_kb": 64, 00:06:52.711 
"state": "offline", 00:06:52.711 "raid_level": "concat", 00:06:52.711 "superblock": false, 00:06:52.711 "num_base_bdevs": 2, 00:06:52.711 "num_base_bdevs_discovered": 1, 00:06:52.711 "num_base_bdevs_operational": 1, 00:06:52.711 "base_bdevs_list": [ 00:06:52.711 { 00:06:52.711 "name": null, 00:06:52.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.711 "is_configured": false, 00:06:52.711 "data_offset": 0, 00:06:52.711 "data_size": 65536 00:06:52.711 }, 00:06:52.711 { 00:06:52.711 "name": "BaseBdev2", 00:06:52.711 "uuid": "67724b72-6c51-455c-bb64-9b1a14dba3c0", 00:06:52.711 "is_configured": true, 00:06:52.711 "data_offset": 0, 00:06:52.711 "data_size": 65536 00:06:52.711 } 00:06:52.711 ] 00:06:52.711 }' 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:52.711 23:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.281 [2024-11-02 23:46:47.133429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:53.281 [2024-11-02 23:46:47.133541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72854 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 72854 ']' 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 72854 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72854 00:06:53.281 killing process with pid 72854 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72854' 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 72854 00:06:53.281 [2024-11-02 23:46:47.226350] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:53.281 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 72854 00:06:53.281 [2024-11-02 23:46:47.227338] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:53.540 23:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:53.540 00:06:53.540 real 0m3.775s 00:06:53.540 user 0m5.972s 00:06:53.540 sys 0m0.755s 00:06:53.540 ************************************ 00:06:53.540 END TEST raid_state_function_test 00:06:53.540 ************************************ 00:06:53.540 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:53.540 23:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.540 23:46:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:06:53.540 23:46:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:06:53.540 23:46:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:53.540 23:46:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:53.541 ************************************ 00:06:53.541 START TEST raid_state_function_test_sb 00:06:53.541 ************************************ 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73096 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73096' 00:06:53.541 Process raid pid: 73096 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73096 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:53.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 73096 ']' 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:53.541 23:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.541 [2024-11-02 23:46:47.591748] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:06:53.541 [2024-11-02 23:46:47.591875] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.801 [2024-11-02 23:46:47.725427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.801 [2024-11-02 23:46:47.751915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.801 [2024-11-02 23:46:47.794529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.801 [2024-11-02 23:46:47.794563] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.369 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:54.369 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:06:54.369 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 
BaseBdev2'\''' -n Existed_Raid 00:06:54.369 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.369 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.369 [2024-11-02 23:46:48.424027] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:54.369 [2024-11-02 23:46:48.424117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:54.369 [2024-11-02 23:46:48.424147] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:54.369 [2024-11-02 23:46:48.424169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:54.369 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.369 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:54.369 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:54.369 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:54.369 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:54.369 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:54.369 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:54.369 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:54.369 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:54.369 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:54.369 23:46:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:54.370 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.370 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.370 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.370 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:54.370 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.629 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:54.629 "name": "Existed_Raid", 00:06:54.629 "uuid": "6fdc51f4-9684-4f92-a3d6-003b6a49db2c", 00:06:54.629 "strip_size_kb": 64, 00:06:54.629 "state": "configuring", 00:06:54.629 "raid_level": "concat", 00:06:54.629 "superblock": true, 00:06:54.629 "num_base_bdevs": 2, 00:06:54.629 "num_base_bdevs_discovered": 0, 00:06:54.629 "num_base_bdevs_operational": 2, 00:06:54.629 "base_bdevs_list": [ 00:06:54.629 { 00:06:54.629 "name": "BaseBdev1", 00:06:54.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.629 "is_configured": false, 00:06:54.629 "data_offset": 0, 00:06:54.629 "data_size": 0 00:06:54.629 }, 00:06:54.629 { 00:06:54.629 "name": "BaseBdev2", 00:06:54.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.629 "is_configured": false, 00:06:54.629 "data_offset": 0, 00:06:54.629 "data_size": 0 00:06:54.629 } 00:06:54.629 ] 00:06:54.629 }' 00:06:54.629 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:54.629 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.887 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:54.887 
23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.887 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.887 [2024-11-02 23:46:48.899133] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:54.887 [2024-11-02 23:46:48.899222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:54.887 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.887 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:54.887 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.887 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.887 [2024-11-02 23:46:48.911109] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:54.887 [2024-11-02 23:46:48.911189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:54.887 [2024-11-02 23:46:48.911222] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:54.887 [2024-11-02 23:46:48.911266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:54.887 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.887 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:54.887 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.887 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.887 
BaseBdev1 00:06:54.887 [2024-11-02 23:46:48.931830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:54.887 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.887 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:54.887 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.888 [ 00:06:54.888 { 00:06:54.888 "name": "BaseBdev1", 00:06:54.888 "aliases": [ 00:06:54.888 "d221d8e2-4918-469d-9f5c-61549be64b01" 00:06:54.888 ], 00:06:54.888 "product_name": "Malloc disk", 00:06:54.888 "block_size": 512, 00:06:54.888 
"num_blocks": 65536, 00:06:54.888 "uuid": "d221d8e2-4918-469d-9f5c-61549be64b01", 00:06:54.888 "assigned_rate_limits": { 00:06:54.888 "rw_ios_per_sec": 0, 00:06:54.888 "rw_mbytes_per_sec": 0, 00:06:54.888 "r_mbytes_per_sec": 0, 00:06:54.888 "w_mbytes_per_sec": 0 00:06:54.888 }, 00:06:54.888 "claimed": true, 00:06:54.888 "claim_type": "exclusive_write", 00:06:54.888 "zoned": false, 00:06:54.888 "supported_io_types": { 00:06:54.888 "read": true, 00:06:54.888 "write": true, 00:06:54.888 "unmap": true, 00:06:54.888 "flush": true, 00:06:54.888 "reset": true, 00:06:54.888 "nvme_admin": false, 00:06:54.888 "nvme_io": false, 00:06:54.888 "nvme_io_md": false, 00:06:54.888 "write_zeroes": true, 00:06:54.888 "zcopy": true, 00:06:54.888 "get_zone_info": false, 00:06:54.888 "zone_management": false, 00:06:54.888 "zone_append": false, 00:06:54.888 "compare": false, 00:06:54.888 "compare_and_write": false, 00:06:54.888 "abort": true, 00:06:54.888 "seek_hole": false, 00:06:54.888 "seek_data": false, 00:06:54.888 "copy": true, 00:06:54.888 "nvme_iov_md": false 00:06:54.888 }, 00:06:54.888 "memory_domains": [ 00:06:54.888 { 00:06:54.888 "dma_device_id": "system", 00:06:54.888 "dma_device_type": 1 00:06:54.888 }, 00:06:54.888 { 00:06:54.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.888 "dma_device_type": 2 00:06:54.888 } 00:06:54.888 ], 00:06:54.888 "driver_specific": {} 00:06:54.888 } 00:06:54.888 ] 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.888 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.146 23:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.146 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.146 "name": "Existed_Raid", 00:06:55.146 "uuid": "82fcb09f-12dc-4d1d-b4e4-b112d59670d2", 00:06:55.146 "strip_size_kb": 64, 00:06:55.146 "state": "configuring", 00:06:55.146 "raid_level": "concat", 00:06:55.146 "superblock": true, 00:06:55.146 "num_base_bdevs": 2, 00:06:55.146 "num_base_bdevs_discovered": 1, 00:06:55.146 "num_base_bdevs_operational": 2, 00:06:55.146 "base_bdevs_list": [ 00:06:55.146 { 00:06:55.146 "name": "BaseBdev1", 00:06:55.146 "uuid": 
"d221d8e2-4918-469d-9f5c-61549be64b01", 00:06:55.146 "is_configured": true, 00:06:55.146 "data_offset": 2048, 00:06:55.146 "data_size": 63488 00:06:55.146 }, 00:06:55.146 { 00:06:55.146 "name": "BaseBdev2", 00:06:55.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.146 "is_configured": false, 00:06:55.146 "data_offset": 0, 00:06:55.146 "data_size": 0 00:06:55.146 } 00:06:55.146 ] 00:06:55.146 }' 00:06:55.146 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.146 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.405 [2024-11-02 23:46:49.323193] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:55.405 [2024-11-02 23:46:49.323294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.405 [2024-11-02 23:46:49.335196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:55.405 [2024-11-02 23:46:49.337088] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:06:55.405 [2024-11-02 23:46:49.337176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.405 "name": "Existed_Raid", 00:06:55.405 "uuid": "a913cae6-82b1-4778-b4cc-41f71947247c", 00:06:55.405 "strip_size_kb": 64, 00:06:55.405 "state": "configuring", 00:06:55.405 "raid_level": "concat", 00:06:55.405 "superblock": true, 00:06:55.405 "num_base_bdevs": 2, 00:06:55.405 "num_base_bdevs_discovered": 1, 00:06:55.405 "num_base_bdevs_operational": 2, 00:06:55.405 "base_bdevs_list": [ 00:06:55.405 { 00:06:55.405 "name": "BaseBdev1", 00:06:55.405 "uuid": "d221d8e2-4918-469d-9f5c-61549be64b01", 00:06:55.405 "is_configured": true, 00:06:55.405 "data_offset": 2048, 00:06:55.405 "data_size": 63488 00:06:55.405 }, 00:06:55.405 { 00:06:55.405 "name": "BaseBdev2", 00:06:55.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.405 "is_configured": false, 00:06:55.405 "data_offset": 0, 00:06:55.405 "data_size": 0 00:06:55.405 } 00:06:55.405 ] 00:06:55.405 }' 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.405 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.672 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:55.672 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.672 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.938 [2024-11-02 23:46:49.769280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:55.938 [2024-11-02 23:46:49.769559] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:55.938 [2024-11-02 23:46:49.769614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:55.938 [2024-11-02 23:46:49.769933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:55.938 BaseBdev2 00:06:55.938 [2024-11-02 23:46:49.770139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:55.938 [2024-11-02 23:46:49.770186] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:06:55.938 [2024-11-02 23:46:49.770344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.938 [ 00:06:55.938 { 00:06:55.938 "name": "BaseBdev2", 00:06:55.938 "aliases": [ 00:06:55.938 "0ceb134f-b803-4655-945f-cd65f705b30d" 00:06:55.938 ], 00:06:55.938 "product_name": "Malloc disk", 00:06:55.938 "block_size": 512, 00:06:55.938 "num_blocks": 65536, 00:06:55.938 "uuid": "0ceb134f-b803-4655-945f-cd65f705b30d", 00:06:55.938 "assigned_rate_limits": { 00:06:55.938 "rw_ios_per_sec": 0, 00:06:55.938 "rw_mbytes_per_sec": 0, 00:06:55.938 "r_mbytes_per_sec": 0, 00:06:55.938 "w_mbytes_per_sec": 0 00:06:55.938 }, 00:06:55.938 "claimed": true, 00:06:55.938 "claim_type": "exclusive_write", 00:06:55.938 "zoned": false, 00:06:55.938 "supported_io_types": { 00:06:55.938 "read": true, 00:06:55.938 "write": true, 00:06:55.938 "unmap": true, 00:06:55.938 "flush": true, 00:06:55.938 "reset": true, 00:06:55.938 "nvme_admin": false, 00:06:55.938 "nvme_io": false, 00:06:55.938 "nvme_io_md": false, 00:06:55.938 "write_zeroes": true, 00:06:55.938 "zcopy": true, 00:06:55.938 "get_zone_info": false, 00:06:55.938 "zone_management": false, 00:06:55.938 "zone_append": false, 00:06:55.938 "compare": false, 00:06:55.938 "compare_and_write": false, 00:06:55.938 "abort": true, 00:06:55.938 "seek_hole": false, 00:06:55.938 "seek_data": false, 00:06:55.938 "copy": true, 00:06:55.938 "nvme_iov_md": false 00:06:55.938 }, 00:06:55.938 "memory_domains": [ 00:06:55.938 { 00:06:55.938 "dma_device_id": "system", 00:06:55.938 "dma_device_type": 1 00:06:55.938 }, 00:06:55.938 { 00:06:55.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.938 "dma_device_type": 2 00:06:55.938 } 00:06:55.938 ], 00:06:55.938 "driver_specific": 
{} 00:06:55.938 } 00:06:55.938 ] 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.938 "name": "Existed_Raid", 00:06:55.938 "uuid": "a913cae6-82b1-4778-b4cc-41f71947247c", 00:06:55.938 "strip_size_kb": 64, 00:06:55.938 "state": "online", 00:06:55.938 "raid_level": "concat", 00:06:55.938 "superblock": true, 00:06:55.938 "num_base_bdevs": 2, 00:06:55.938 "num_base_bdevs_discovered": 2, 00:06:55.938 "num_base_bdevs_operational": 2, 00:06:55.938 "base_bdevs_list": [ 00:06:55.938 { 00:06:55.938 "name": "BaseBdev1", 00:06:55.938 "uuid": "d221d8e2-4918-469d-9f5c-61549be64b01", 00:06:55.938 "is_configured": true, 00:06:55.938 "data_offset": 2048, 00:06:55.938 "data_size": 63488 00:06:55.938 }, 00:06:55.938 { 00:06:55.938 "name": "BaseBdev2", 00:06:55.938 "uuid": "0ceb134f-b803-4655-945f-cd65f705b30d", 00:06:55.938 "is_configured": true, 00:06:55.938 "data_offset": 2048, 00:06:55.938 "data_size": 63488 00:06:55.938 } 00:06:55.938 ] 00:06:55.938 }' 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.938 23:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.197 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:56.197 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:56.197 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:56.197 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:56.197 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # 
local name 00:06:56.197 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:56.197 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:56.197 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.197 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.197 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:56.197 [2024-11-02 23:46:50.228818] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:56.198 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.198 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:56.198 "name": "Existed_Raid", 00:06:56.198 "aliases": [ 00:06:56.198 "a913cae6-82b1-4778-b4cc-41f71947247c" 00:06:56.198 ], 00:06:56.198 "product_name": "Raid Volume", 00:06:56.198 "block_size": 512, 00:06:56.198 "num_blocks": 126976, 00:06:56.198 "uuid": "a913cae6-82b1-4778-b4cc-41f71947247c", 00:06:56.198 "assigned_rate_limits": { 00:06:56.198 "rw_ios_per_sec": 0, 00:06:56.198 "rw_mbytes_per_sec": 0, 00:06:56.198 "r_mbytes_per_sec": 0, 00:06:56.198 "w_mbytes_per_sec": 0 00:06:56.198 }, 00:06:56.198 "claimed": false, 00:06:56.198 "zoned": false, 00:06:56.198 "supported_io_types": { 00:06:56.198 "read": true, 00:06:56.198 "write": true, 00:06:56.198 "unmap": true, 00:06:56.198 "flush": true, 00:06:56.198 "reset": true, 00:06:56.198 "nvme_admin": false, 00:06:56.198 "nvme_io": false, 00:06:56.198 "nvme_io_md": false, 00:06:56.198 "write_zeroes": true, 00:06:56.198 "zcopy": false, 00:06:56.198 "get_zone_info": false, 00:06:56.198 "zone_management": false, 00:06:56.198 "zone_append": false, 00:06:56.198 "compare": false, 00:06:56.198 "compare_and_write": 
false, 00:06:56.198 "abort": false, 00:06:56.198 "seek_hole": false, 00:06:56.198 "seek_data": false, 00:06:56.198 "copy": false, 00:06:56.198 "nvme_iov_md": false 00:06:56.198 }, 00:06:56.198 "memory_domains": [ 00:06:56.198 { 00:06:56.198 "dma_device_id": "system", 00:06:56.198 "dma_device_type": 1 00:06:56.198 }, 00:06:56.198 { 00:06:56.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.198 "dma_device_type": 2 00:06:56.198 }, 00:06:56.198 { 00:06:56.198 "dma_device_id": "system", 00:06:56.198 "dma_device_type": 1 00:06:56.198 }, 00:06:56.198 { 00:06:56.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.198 "dma_device_type": 2 00:06:56.198 } 00:06:56.198 ], 00:06:56.198 "driver_specific": { 00:06:56.198 "raid": { 00:06:56.198 "uuid": "a913cae6-82b1-4778-b4cc-41f71947247c", 00:06:56.198 "strip_size_kb": 64, 00:06:56.198 "state": "online", 00:06:56.198 "raid_level": "concat", 00:06:56.198 "superblock": true, 00:06:56.198 "num_base_bdevs": 2, 00:06:56.198 "num_base_bdevs_discovered": 2, 00:06:56.198 "num_base_bdevs_operational": 2, 00:06:56.198 "base_bdevs_list": [ 00:06:56.198 { 00:06:56.198 "name": "BaseBdev1", 00:06:56.198 "uuid": "d221d8e2-4918-469d-9f5c-61549be64b01", 00:06:56.198 "is_configured": true, 00:06:56.198 "data_offset": 2048, 00:06:56.198 "data_size": 63488 00:06:56.198 }, 00:06:56.198 { 00:06:56.198 "name": "BaseBdev2", 00:06:56.198 "uuid": "0ceb134f-b803-4655-945f-cd65f705b30d", 00:06:56.198 "is_configured": true, 00:06:56.198 "data_offset": 2048, 00:06:56.198 "data_size": 63488 00:06:56.198 } 00:06:56.198 ] 00:06:56.198 } 00:06:56.198 } 00:06:56.198 }' 00:06:56.198 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:56.198 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:56.198 BaseBdev2' 00:06:56.198 23:46:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.457 [2024-11-02 23:46:50.416256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:56.457 [2024-11-02 23:46:50.416334] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:56.457 [2024-11-02 23:46:50.416416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.457 "name": "Existed_Raid", 00:06:56.457 "uuid": "a913cae6-82b1-4778-b4cc-41f71947247c", 00:06:56.457 "strip_size_kb": 64, 00:06:56.457 "state": "offline", 00:06:56.457 "raid_level": "concat", 00:06:56.457 "superblock": true, 00:06:56.457 "num_base_bdevs": 2, 00:06:56.457 "num_base_bdevs_discovered": 1, 00:06:56.457 "num_base_bdevs_operational": 1, 00:06:56.457 "base_bdevs_list": [ 00:06:56.457 { 00:06:56.457 "name": null, 00:06:56.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.457 "is_configured": false, 00:06:56.457 "data_offset": 0, 00:06:56.457 "data_size": 63488 00:06:56.457 }, 00:06:56.457 
{ 00:06:56.457 "name": "BaseBdev2", 00:06:56.457 "uuid": "0ceb134f-b803-4655-945f-cd65f705b30d", 00:06:56.457 "is_configured": true, 00:06:56.457 "data_offset": 2048, 00:06:56.457 "data_size": 63488 00:06:56.457 } 00:06:56.457 ] 00:06:56.457 }' 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.457 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.027 [2024-11-02 23:46:50.930768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:57.027 [2024-11-02 23:46:50.930863] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73096 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 73096 ']' 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 73096 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:06:57.027 23:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:57.027 23:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73096 
00:06:57.027 23:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:57.027 23:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:57.027 23:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73096' 00:06:57.027 killing process with pid 73096 00:06:57.027 23:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 73096 00:06:57.027 [2024-11-02 23:46:51.021674] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:57.027 23:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 73096 00:06:57.027 [2024-11-02 23:46:51.022652] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:57.297 23:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:06:57.297 00:06:57.297 real 0m3.733s 00:06:57.297 user 0m5.851s 00:06:57.297 sys 0m0.718s 00:06:57.297 23:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:57.297 23:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.297 ************************************ 00:06:57.297 END TEST raid_state_function_test_sb 00:06:57.297 ************************************ 00:06:57.297 23:46:51 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:06:57.297 23:46:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:57.297 23:46:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.297 23:46:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:57.297 ************************************ 00:06:57.297 START TEST raid_superblock_test 00:06:57.297 ************************************ 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1127 -- # raid_superblock_test concat 2 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73326 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L 
bdev_raid 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73326 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 73326 ']' 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:57.297 23:46:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.557 [2024-11-02 23:46:51.390367] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:06:57.557 [2024-11-02 23:46:51.390500] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73326 ] 00:06:57.557 [2024-11-02 23:46:51.525633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.557 [2024-11-02 23:46:51.551837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.557 [2024-11-02 23:46:51.593734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.557 [2024-11-02 23:46:51.593808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.498 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:58.498 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:06:58.498 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:58.498 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:58.498 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:58.498 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:58.498 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:58.498 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:58.498 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:58.498 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:58.498 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:58.498 
23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.499 malloc1 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.499 [2024-11-02 23:46:52.255374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:58.499 [2024-11-02 23:46:52.255502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:58.499 [2024-11-02 23:46:52.255555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:58.499 [2024-11-02 23:46:52.255609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:58.499 [2024-11-02 23:46:52.257704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:58.499 [2024-11-02 23:46:52.257789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:58.499 pt1 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.499 malloc2 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.499 [2024-11-02 23:46:52.283767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:58.499 [2024-11-02 23:46:52.283853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:58.499 [2024-11-02 23:46:52.283885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:58.499 [2024-11-02 23:46:52.283913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:58.499 [2024-11-02 23:46:52.285931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:58.499 [2024-11-02 23:46:52.285999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:58.499 
pt2 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.499 [2024-11-02 23:46:52.295786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:58.499 [2024-11-02 23:46:52.297712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:58.499 [2024-11-02 23:46:52.297900] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:58.499 [2024-11-02 23:46:52.297951] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:58.499 [2024-11-02 23:46:52.298255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:58.499 [2024-11-02 23:46:52.298427] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:58.499 [2024-11-02 23:46:52.298471] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:06:58.499 [2024-11-02 23:46:52.298637] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:58.499 "name": "raid_bdev1", 00:06:58.499 "uuid": "f6f3064b-128e-4440-8f3d-c16a147bff3c", 00:06:58.499 "strip_size_kb": 64, 00:06:58.499 "state": "online", 00:06:58.499 "raid_level": "concat", 00:06:58.499 "superblock": true, 00:06:58.499 "num_base_bdevs": 2, 00:06:58.499 "num_base_bdevs_discovered": 2, 00:06:58.499 "num_base_bdevs_operational": 2, 00:06:58.499 "base_bdevs_list": [ 00:06:58.499 { 00:06:58.499 "name": "pt1", 
00:06:58.499 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:58.499 "is_configured": true, 00:06:58.499 "data_offset": 2048, 00:06:58.499 "data_size": 63488 00:06:58.499 }, 00:06:58.499 { 00:06:58.499 "name": "pt2", 00:06:58.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:58.499 "is_configured": true, 00:06:58.499 "data_offset": 2048, 00:06:58.499 "data_size": 63488 00:06:58.499 } 00:06:58.499 ] 00:06:58.499 }' 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:58.499 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.759 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:58.759 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:58.759 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:58.759 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:58.759 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:58.759 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:58.759 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:58.759 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.759 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:58.759 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.759 [2024-11-02 23:46:52.723365] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:58.759 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.759 23:46:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:58.759 "name": "raid_bdev1", 00:06:58.759 "aliases": [ 00:06:58.759 "f6f3064b-128e-4440-8f3d-c16a147bff3c" 00:06:58.759 ], 00:06:58.759 "product_name": "Raid Volume", 00:06:58.759 "block_size": 512, 00:06:58.759 "num_blocks": 126976, 00:06:58.759 "uuid": "f6f3064b-128e-4440-8f3d-c16a147bff3c", 00:06:58.759 "assigned_rate_limits": { 00:06:58.759 "rw_ios_per_sec": 0, 00:06:58.759 "rw_mbytes_per_sec": 0, 00:06:58.759 "r_mbytes_per_sec": 0, 00:06:58.759 "w_mbytes_per_sec": 0 00:06:58.759 }, 00:06:58.759 "claimed": false, 00:06:58.759 "zoned": false, 00:06:58.759 "supported_io_types": { 00:06:58.759 "read": true, 00:06:58.759 "write": true, 00:06:58.759 "unmap": true, 00:06:58.759 "flush": true, 00:06:58.759 "reset": true, 00:06:58.759 "nvme_admin": false, 00:06:58.759 "nvme_io": false, 00:06:58.759 "nvme_io_md": false, 00:06:58.759 "write_zeroes": true, 00:06:58.759 "zcopy": false, 00:06:58.759 "get_zone_info": false, 00:06:58.759 "zone_management": false, 00:06:58.759 "zone_append": false, 00:06:58.759 "compare": false, 00:06:58.759 "compare_and_write": false, 00:06:58.759 "abort": false, 00:06:58.759 "seek_hole": false, 00:06:58.759 "seek_data": false, 00:06:58.759 "copy": false, 00:06:58.759 "nvme_iov_md": false 00:06:58.759 }, 00:06:58.759 "memory_domains": [ 00:06:58.759 { 00:06:58.759 "dma_device_id": "system", 00:06:58.759 "dma_device_type": 1 00:06:58.759 }, 00:06:58.759 { 00:06:58.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.759 "dma_device_type": 2 00:06:58.759 }, 00:06:58.759 { 00:06:58.759 "dma_device_id": "system", 00:06:58.759 "dma_device_type": 1 00:06:58.759 }, 00:06:58.759 { 00:06:58.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.759 "dma_device_type": 2 00:06:58.759 } 00:06:58.759 ], 00:06:58.759 "driver_specific": { 00:06:58.759 "raid": { 00:06:58.759 "uuid": "f6f3064b-128e-4440-8f3d-c16a147bff3c", 00:06:58.759 "strip_size_kb": 64, 00:06:58.759 "state": "online", 00:06:58.759 
"raid_level": "concat", 00:06:58.759 "superblock": true, 00:06:58.759 "num_base_bdevs": 2, 00:06:58.759 "num_base_bdevs_discovered": 2, 00:06:58.759 "num_base_bdevs_operational": 2, 00:06:58.759 "base_bdevs_list": [ 00:06:58.759 { 00:06:58.759 "name": "pt1", 00:06:58.759 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:58.759 "is_configured": true, 00:06:58.759 "data_offset": 2048, 00:06:58.759 "data_size": 63488 00:06:58.759 }, 00:06:58.759 { 00:06:58.759 "name": "pt2", 00:06:58.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:58.759 "is_configured": true, 00:06:58.759 "data_offset": 2048, 00:06:58.759 "data_size": 63488 00:06:58.759 } 00:06:58.759 ] 00:06:58.759 } 00:06:58.759 } 00:06:58.760 }' 00:06:58.760 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:58.760 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:58.760 pt2' 00:06:58.760 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:58.760 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:58.760 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:58.760 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:58.760 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:58.760 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.760 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.020 23:46:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.020 [2024-11-02 23:46:52.946840] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f6f3064b-128e-4440-8f3d-c16a147bff3c 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
f6f3064b-128e-4440-8f3d-c16a147bff3c ']' 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.020 [2024-11-02 23:46:52.978526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:59.020 [2024-11-02 23:46:52.978593] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:59.020 [2024-11-02 23:46:52.978701] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:59.020 [2024-11-02 23:46:52.978798] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:59.020 [2024-11-02 23:46:52.978848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:06:59.020 23:46:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.020 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:06:59.020 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:59.020 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:59.020 23:46:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:06:59.020 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.020 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.020 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.020 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:59.020 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:59.020 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.020 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.020 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.020 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:59.020 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.020 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.020 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:59.020 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.280 [2024-11-02 23:46:53.122298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:59.280 [2024-11-02 23:46:53.124210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:59.280 [2024-11-02 23:46:53.124309] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:59.280 [2024-11-02 23:46:53.124392] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:59.280 [2024-11-02 23:46:53.124474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:59.280 [2024-11-02 23:46:53.124503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:06:59.280 request: 00:06:59.280 { 00:06:59.280 "name": "raid_bdev1", 00:06:59.280 "raid_level": "concat", 00:06:59.280 "base_bdevs": [ 00:06:59.280 "malloc1", 00:06:59.280 "malloc2" 00:06:59.280 ], 00:06:59.280 "strip_size_kb": 64, 
00:06:59.280 "superblock": false, 00:06:59.280 "method": "bdev_raid_create", 00:06:59.280 "req_id": 1 00:06:59.280 } 00:06:59.280 Got JSON-RPC error response 00:06:59.280 response: 00:06:59.280 { 00:06:59.280 "code": -17, 00:06:59.280 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:59.280 } 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.280 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.281 [2024-11-02 23:46:53.186186] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:06:59.281 [2024-11-02 23:46:53.186267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:59.281 [2024-11-02 23:46:53.186300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:59.281 [2024-11-02 23:46:53.186326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.281 [2024-11-02 23:46:53.188485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.281 [2024-11-02 23:46:53.188566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:59.281 [2024-11-02 23:46:53.188652] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:59.281 [2024-11-02 23:46:53.188718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:59.281 pt1 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.281 "name": "raid_bdev1", 00:06:59.281 "uuid": "f6f3064b-128e-4440-8f3d-c16a147bff3c", 00:06:59.281 "strip_size_kb": 64, 00:06:59.281 "state": "configuring", 00:06:59.281 "raid_level": "concat", 00:06:59.281 "superblock": true, 00:06:59.281 "num_base_bdevs": 2, 00:06:59.281 "num_base_bdevs_discovered": 1, 00:06:59.281 "num_base_bdevs_operational": 2, 00:06:59.281 "base_bdevs_list": [ 00:06:59.281 { 00:06:59.281 "name": "pt1", 00:06:59.281 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:59.281 "is_configured": true, 00:06:59.281 "data_offset": 2048, 00:06:59.281 "data_size": 63488 00:06:59.281 }, 00:06:59.281 { 00:06:59.281 "name": null, 00:06:59.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:59.281 "is_configured": false, 00:06:59.281 "data_offset": 2048, 00:06:59.281 "data_size": 63488 00:06:59.281 } 00:06:59.281 ] 00:06:59.281 }' 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.281 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.541 [2024-11-02 23:46:53.597441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:59.541 [2024-11-02 23:46:53.597499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:59.541 [2024-11-02 23:46:53.597517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:59.541 [2024-11-02 23:46:53.597526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.541 [2024-11-02 23:46:53.597913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.541 [2024-11-02 23:46:53.597936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:59.541 [2024-11-02 23:46:53.597999] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:59.541 [2024-11-02 23:46:53.598018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:59.541 [2024-11-02 23:46:53.598103] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:59.541 [2024-11-02 23:46:53.598111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:59.541 [2024-11-02 23:46:53.598339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:06:59.541 [2024-11-02 23:46:53.598455] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 
00:06:59.541 [2024-11-02 23:46:53.598470] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:06:59.541 [2024-11-02 23:46:53.598565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.541 pt2 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.541 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.801 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.801 "name": "raid_bdev1", 00:06:59.801 "uuid": "f6f3064b-128e-4440-8f3d-c16a147bff3c", 00:06:59.801 "strip_size_kb": 64, 00:06:59.801 "state": "online", 00:06:59.801 "raid_level": "concat", 00:06:59.801 "superblock": true, 00:06:59.801 "num_base_bdevs": 2, 00:06:59.801 "num_base_bdevs_discovered": 2, 00:06:59.801 "num_base_bdevs_operational": 2, 00:06:59.801 "base_bdevs_list": [ 00:06:59.801 { 00:06:59.801 "name": "pt1", 00:06:59.801 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:59.801 "is_configured": true, 00:06:59.801 "data_offset": 2048, 00:06:59.801 "data_size": 63488 00:06:59.801 }, 00:06:59.801 { 00:06:59.801 "name": "pt2", 00:06:59.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:59.801 "is_configured": true, 00:06:59.801 "data_offset": 2048, 00:06:59.801 "data_size": 63488 00:06:59.801 } 00:06:59.801 ] 00:06:59.801 }' 00:06:59.801 23:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.801 23:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.060 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:00.060 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:00.060 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:00.060 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:00.060 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:00.060 23:46:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:00.060 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:00.060 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:00.060 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.060 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.060 [2024-11-02 23:46:54.064903] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.060 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.060 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:00.060 "name": "raid_bdev1", 00:07:00.060 "aliases": [ 00:07:00.060 "f6f3064b-128e-4440-8f3d-c16a147bff3c" 00:07:00.060 ], 00:07:00.060 "product_name": "Raid Volume", 00:07:00.060 "block_size": 512, 00:07:00.060 "num_blocks": 126976, 00:07:00.060 "uuid": "f6f3064b-128e-4440-8f3d-c16a147bff3c", 00:07:00.060 "assigned_rate_limits": { 00:07:00.060 "rw_ios_per_sec": 0, 00:07:00.060 "rw_mbytes_per_sec": 0, 00:07:00.060 "r_mbytes_per_sec": 0, 00:07:00.060 "w_mbytes_per_sec": 0 00:07:00.060 }, 00:07:00.060 "claimed": false, 00:07:00.060 "zoned": false, 00:07:00.060 "supported_io_types": { 00:07:00.060 "read": true, 00:07:00.060 "write": true, 00:07:00.060 "unmap": true, 00:07:00.060 "flush": true, 00:07:00.060 "reset": true, 00:07:00.060 "nvme_admin": false, 00:07:00.060 "nvme_io": false, 00:07:00.060 "nvme_io_md": false, 00:07:00.060 "write_zeroes": true, 00:07:00.061 "zcopy": false, 00:07:00.061 "get_zone_info": false, 00:07:00.061 "zone_management": false, 00:07:00.061 "zone_append": false, 00:07:00.061 "compare": false, 00:07:00.061 "compare_and_write": false, 00:07:00.061 "abort": false, 00:07:00.061 "seek_hole": false, 00:07:00.061 
"seek_data": false, 00:07:00.061 "copy": false, 00:07:00.061 "nvme_iov_md": false 00:07:00.061 }, 00:07:00.061 "memory_domains": [ 00:07:00.061 { 00:07:00.061 "dma_device_id": "system", 00:07:00.061 "dma_device_type": 1 00:07:00.061 }, 00:07:00.061 { 00:07:00.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.061 "dma_device_type": 2 00:07:00.061 }, 00:07:00.061 { 00:07:00.061 "dma_device_id": "system", 00:07:00.061 "dma_device_type": 1 00:07:00.061 }, 00:07:00.061 { 00:07:00.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.061 "dma_device_type": 2 00:07:00.061 } 00:07:00.061 ], 00:07:00.061 "driver_specific": { 00:07:00.061 "raid": { 00:07:00.061 "uuid": "f6f3064b-128e-4440-8f3d-c16a147bff3c", 00:07:00.061 "strip_size_kb": 64, 00:07:00.061 "state": "online", 00:07:00.061 "raid_level": "concat", 00:07:00.061 "superblock": true, 00:07:00.061 "num_base_bdevs": 2, 00:07:00.061 "num_base_bdevs_discovered": 2, 00:07:00.061 "num_base_bdevs_operational": 2, 00:07:00.061 "base_bdevs_list": [ 00:07:00.061 { 00:07:00.061 "name": "pt1", 00:07:00.061 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:00.061 "is_configured": true, 00:07:00.061 "data_offset": 2048, 00:07:00.061 "data_size": 63488 00:07:00.061 }, 00:07:00.061 { 00:07:00.061 "name": "pt2", 00:07:00.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:00.061 "is_configured": true, 00:07:00.061 "data_offset": 2048, 00:07:00.061 "data_size": 63488 00:07:00.061 } 00:07:00.061 ] 00:07:00.061 } 00:07:00.061 } 00:07:00.061 }' 00:07:00.061 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:00.061 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:00.061 pt2' 00:07:00.061 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.321 23:46:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.321 [2024-11-02 23:46:54.296487] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f6f3064b-128e-4440-8f3d-c16a147bff3c '!=' f6f3064b-128e-4440-8f3d-c16a147bff3c ']' 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73326 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 73326 ']' 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 73326 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73326 00:07:00.321 killing process with pid 73326 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 73326' 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 73326 00:07:00.321 [2024-11-02 23:46:54.366367] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:00.321 [2024-11-02 23:46:54.366486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:00.321 [2024-11-02 23:46:54.366537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:00.321 [2024-11-02 23:46:54.366546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:00.321 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 73326 00:07:00.321 [2024-11-02 23:46:54.389094] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:00.591 23:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:00.591 ************************************ 00:07:00.591 END TEST raid_superblock_test 00:07:00.591 ************************************ 00:07:00.591 00:07:00.591 real 0m3.291s 00:07:00.591 user 0m5.114s 00:07:00.591 sys 0m0.711s 00:07:00.591 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:00.591 23:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.591 23:46:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:00.591 23:46:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:00.591 23:46:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:00.591 23:46:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:00.591 ************************************ 00:07:00.591 START TEST raid_read_error_test 00:07:00.591 ************************************ 00:07:00.591 23:46:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:07:00.591 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:00.591 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:00.592 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:00.592 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:00.592 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:00.592 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:00.592 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:00.592 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:00.592 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:00.592 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:00.592 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:00.592 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:00.592 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:00.592 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:00.592 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:00.592 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:00.592 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:00.592 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:00.853 23:46:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:00.853 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:00.853 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:00.853 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:00.853 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GXqrwSRdZl 00:07:00.853 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73527 00:07:00.853 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:00.853 23:46:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73527 00:07:00.853 23:46:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 73527 ']' 00:07:00.853 23:46:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.853 23:46:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:00.853 23:46:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.853 23:46:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:00.853 23:46:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.853 [2024-11-02 23:46:54.792383] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:07:00.853 [2024-11-02 23:46:54.792545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73527 ] 00:07:01.113 [2024-11-02 23:46:54.954255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.113 [2024-11-02 23:46:54.980115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.113 [2024-11-02 23:46:55.021744] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.113 [2024-11-02 23:46:55.021792] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.691 BaseBdev1_malloc 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.691 true 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.691 [2024-11-02 23:46:55.667144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:01.691 [2024-11-02 23:46:55.667237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:01.691 [2024-11-02 23:46:55.667286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:01.691 [2024-11-02 23:46:55.667322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:01.691 [2024-11-02 23:46:55.669451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:01.691 [2024-11-02 23:46:55.669523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:01.691 BaseBdev1 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.691 BaseBdev2_malloc 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.691 true 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.691 [2024-11-02 23:46:55.707536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:01.691 [2024-11-02 23:46:55.707623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:01.691 [2024-11-02 23:46:55.707661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:01.691 [2024-11-02 23:46:55.707699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:01.691 [2024-11-02 23:46:55.709726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:01.691 [2024-11-02 23:46:55.709810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:01.691 BaseBdev2 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.691 [2024-11-02 23:46:55.719591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:07:01.691 [2024-11-02 23:46:55.721456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:01.691 [2024-11-02 23:46:55.721677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:01.691 [2024-11-02 23:46:55.721720] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:01.691 [2024-11-02 23:46:55.721998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:01.691 [2024-11-02 23:46:55.722166] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:01.691 [2024-11-02 23:46:55.722215] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:01.691 [2024-11-02 23:46:55.722383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.691 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.692 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.692 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.692 "name": "raid_bdev1", 00:07:01.692 "uuid": "6cbab87a-62af-44a3-bc2a-9979132506b8", 00:07:01.692 "strip_size_kb": 64, 00:07:01.692 "state": "online", 00:07:01.692 "raid_level": "concat", 00:07:01.692 "superblock": true, 00:07:01.692 "num_base_bdevs": 2, 00:07:01.692 "num_base_bdevs_discovered": 2, 00:07:01.692 "num_base_bdevs_operational": 2, 00:07:01.692 "base_bdevs_list": [ 00:07:01.692 { 00:07:01.692 "name": "BaseBdev1", 00:07:01.692 "uuid": "e2d9d008-78b9-5193-b3d2-b4e433694eaa", 00:07:01.692 "is_configured": true, 00:07:01.692 "data_offset": 2048, 00:07:01.692 "data_size": 63488 00:07:01.692 }, 00:07:01.692 { 00:07:01.692 "name": "BaseBdev2", 00:07:01.692 "uuid": "0d227833-d54e-55df-84c4-4af5617a7cf0", 00:07:01.692 "is_configured": true, 00:07:01.692 "data_offset": 2048, 00:07:01.692 "data_size": 63488 00:07:01.692 } 00:07:01.692 ] 00:07:01.692 }' 00:07:01.692 23:46:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:01.692 23:46:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.261 23:46:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:02.261 23:46:56 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:02.261 [2024-11-02 23:46:56.263016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.202 "name": "raid_bdev1", 00:07:03.202 "uuid": "6cbab87a-62af-44a3-bc2a-9979132506b8", 00:07:03.202 "strip_size_kb": 64, 00:07:03.202 "state": "online", 00:07:03.202 "raid_level": "concat", 00:07:03.202 "superblock": true, 00:07:03.202 "num_base_bdevs": 2, 00:07:03.202 "num_base_bdevs_discovered": 2, 00:07:03.202 "num_base_bdevs_operational": 2, 00:07:03.202 "base_bdevs_list": [ 00:07:03.202 { 00:07:03.202 "name": "BaseBdev1", 00:07:03.202 "uuid": "e2d9d008-78b9-5193-b3d2-b4e433694eaa", 00:07:03.202 "is_configured": true, 00:07:03.202 "data_offset": 2048, 00:07:03.202 "data_size": 63488 00:07:03.202 }, 00:07:03.202 { 00:07:03.202 "name": "BaseBdev2", 00:07:03.202 "uuid": "0d227833-d54e-55df-84c4-4af5617a7cf0", 00:07:03.202 "is_configured": true, 00:07:03.202 "data_offset": 2048, 00:07:03.202 "data_size": 63488 00:07:03.202 } 00:07:03.202 ] 00:07:03.202 }' 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.202 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.770 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:03.770 23:46:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.770 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.770 [2024-11-02 23:46:57.610497] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:03.770 [2024-11-02 23:46:57.610580] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:03.770 [2024-11-02 23:46:57.613086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.770 [2024-11-02 23:46:57.613170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.770 [2024-11-02 23:46:57.613238] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.770 [2024-11-02 23:46:57.613294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:03.770 { 00:07:03.770 "results": [ 00:07:03.770 { 00:07:03.770 "job": "raid_bdev1", 00:07:03.770 "core_mask": "0x1", 00:07:03.770 "workload": "randrw", 00:07:03.770 "percentage": 50, 00:07:03.770 "status": "finished", 00:07:03.770 "queue_depth": 1, 00:07:03.770 "io_size": 131072, 00:07:03.770 "runtime": 1.348368, 00:07:03.770 "iops": 17757.76345923368, 00:07:03.770 "mibps": 2219.72043240421, 00:07:03.770 "io_failed": 1, 00:07:03.770 "io_timeout": 0, 00:07:03.770 "avg_latency_us": 77.86424033971592, 00:07:03.770 "min_latency_us": 24.370305676855896, 00:07:03.770 "max_latency_us": 1366.5257641921398 00:07:03.770 } 00:07:03.770 ], 00:07:03.770 "core_count": 1 00:07:03.770 } 00:07:03.770 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.770 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73527 00:07:03.770 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 73527 ']' 00:07:03.770 23:46:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 73527 00:07:03.770 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:07:03.770 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:03.770 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73527 00:07:03.770 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:03.770 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:03.770 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73527' 00:07:03.770 killing process with pid 73527 00:07:03.770 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 73527 00:07:03.770 [2024-11-02 23:46:57.664070] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:03.770 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 73527 00:07:03.770 [2024-11-02 23:46:57.679048] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:04.031 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GXqrwSRdZl 00:07:04.031 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:04.031 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:04.031 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:04.031 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:04.031 ************************************ 00:07:04.031 END TEST raid_read_error_test 00:07:04.031 ************************************ 00:07:04.031 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 
00:07:04.031 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:04.031 23:46:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:04.031 00:07:04.031 real 0m3.215s 00:07:04.031 user 0m4.084s 00:07:04.031 sys 0m0.528s 00:07:04.031 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:04.031 23:46:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.031 23:46:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:04.031 23:46:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:04.031 23:46:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:04.031 23:46:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:04.031 ************************************ 00:07:04.031 START TEST raid_write_error_test 00:07:04.031 ************************************ 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:04.031 23:46:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.W6R3PanVTz 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73656 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73656 00:07:04.031 23:46:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 73656 ']' 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:04.031 23:46:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.031 [2024-11-02 23:46:58.062085] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:07:04.031 [2024-11-02 23:46:58.062281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73656 ] 00:07:04.291 [2024-11-02 23:46:58.216358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.292 [2024-11-02 23:46:58.241113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.292 [2024-11-02 23:46:58.282430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.292 [2024-11-02 23:46:58.282547] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.860 BaseBdev1_malloc 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.860 true 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.860 [2024-11-02 23:46:58.919718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:04.860 [2024-11-02 23:46:58.919775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:04.860 [2024-11-02 23:46:58.919805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:04.860 [2024-11-02 23:46:58.919820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:04.860 [2024-11-02 23:46:58.921855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:04.860 [2024-11-02 23:46:58.921959] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:04.860 BaseBdev1 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.860 BaseBdev2_malloc 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.860 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.860 true 00:07:05.124 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.124 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:05.124 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.124 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.124 [2024-11-02 23:46:58.960112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:05.124 [2024-11-02 23:46:58.960231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:05.124 [2024-11-02 23:46:58.960254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:05.124 
[2024-11-02 23:46:58.960271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:05.124 [2024-11-02 23:46:58.962435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:05.124 [2024-11-02 23:46:58.962473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:05.124 BaseBdev2 00:07:05.124 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.124 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:05.124 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.124 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.124 [2024-11-02 23:46:58.972157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:05.124 [2024-11-02 23:46:58.974044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:05.125 [2024-11-02 23:46:58.974262] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:05.125 [2024-11-02 23:46:58.974310] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:05.125 [2024-11-02 23:46:58.974598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:05.125 [2024-11-02 23:46:58.974773] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:05.125 [2024-11-02 23:46:58.974821] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:05.125 [2024-11-02 23:46:58.974999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.125 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.125 
23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:05.125 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:05.125 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:05.125 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:05.125 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.125 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:05.125 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.125 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.125 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.125 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.125 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:05.125 23:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.125 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.125 23:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.125 23:46:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.125 23:46:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.125 "name": "raid_bdev1", 00:07:05.125 "uuid": "1c70e201-1306-491c-8f0b-ea8a40bffa36", 00:07:05.125 "strip_size_kb": 64, 00:07:05.125 "state": "online", 00:07:05.125 "raid_level": "concat", 00:07:05.125 "superblock": true, 
00:07:05.125 "num_base_bdevs": 2, 00:07:05.125 "num_base_bdevs_discovered": 2, 00:07:05.125 "num_base_bdevs_operational": 2, 00:07:05.125 "base_bdevs_list": [ 00:07:05.125 { 00:07:05.125 "name": "BaseBdev1", 00:07:05.125 "uuid": "369b94d4-2a4b-5236-869d-94e3e2bca728", 00:07:05.125 "is_configured": true, 00:07:05.125 "data_offset": 2048, 00:07:05.125 "data_size": 63488 00:07:05.125 }, 00:07:05.125 { 00:07:05.125 "name": "BaseBdev2", 00:07:05.125 "uuid": "04032cd9-a927-509d-b657-c27bf4a32327", 00:07:05.125 "is_configured": true, 00:07:05.125 "data_offset": 2048, 00:07:05.125 "data_size": 63488 00:07:05.125 } 00:07:05.125 ] 00:07:05.125 }' 00:07:05.125 23:46:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.125 23:46:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.386 23:46:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:05.386 23:46:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:05.645 [2024-11-02 23:46:59.519571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.592 "name": "raid_bdev1", 00:07:06.592 "uuid": "1c70e201-1306-491c-8f0b-ea8a40bffa36", 00:07:06.592 "strip_size_kb": 64, 00:07:06.592 "state": "online", 00:07:06.592 "raid_level": "concat", 
00:07:06.592 "superblock": true, 00:07:06.592 "num_base_bdevs": 2, 00:07:06.592 "num_base_bdevs_discovered": 2, 00:07:06.592 "num_base_bdevs_operational": 2, 00:07:06.592 "base_bdevs_list": [ 00:07:06.592 { 00:07:06.592 "name": "BaseBdev1", 00:07:06.592 "uuid": "369b94d4-2a4b-5236-869d-94e3e2bca728", 00:07:06.592 "is_configured": true, 00:07:06.592 "data_offset": 2048, 00:07:06.592 "data_size": 63488 00:07:06.592 }, 00:07:06.592 { 00:07:06.592 "name": "BaseBdev2", 00:07:06.592 "uuid": "04032cd9-a927-509d-b657-c27bf4a32327", 00:07:06.592 "is_configured": true, 00:07:06.592 "data_offset": 2048, 00:07:06.592 "data_size": 63488 00:07:06.592 } 00:07:06.592 ] 00:07:06.592 }' 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.592 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.852 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:06.852 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.852 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.852 [2024-11-02 23:47:00.867221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:06.852 [2024-11-02 23:47:00.867294] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:06.852 [2024-11-02 23:47:00.869706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:06.852 [2024-11-02 23:47:00.869792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.852 [2024-11-02 23:47:00.869845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:06.852 [2024-11-02 23:47:00.869857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:06.852 { 
00:07:06.852 "results": [ 00:07:06.852 { 00:07:06.852 "job": "raid_bdev1", 00:07:06.852 "core_mask": "0x1", 00:07:06.852 "workload": "randrw", 00:07:06.852 "percentage": 50, 00:07:06.852 "status": "finished", 00:07:06.852 "queue_depth": 1, 00:07:06.852 "io_size": 131072, 00:07:06.852 "runtime": 1.348409, 00:07:06.852 "iops": 17535.48070355508, 00:07:06.852 "mibps": 2191.935087944385, 00:07:06.852 "io_failed": 1, 00:07:06.852 "io_timeout": 0, 00:07:06.852 "avg_latency_us": 78.76454383377525, 00:07:06.852 "min_latency_us": 24.482096069868994, 00:07:06.852 "max_latency_us": 1352.216593886463 00:07:06.852 } 00:07:06.852 ], 00:07:06.852 "core_count": 1 00:07:06.852 } 00:07:06.852 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.852 23:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73656 00:07:06.852 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 73656 ']' 00:07:06.852 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 73656 00:07:06.852 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:07:06.852 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:06.852 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73656 00:07:06.852 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:06.852 killing process with pid 73656 00:07:06.852 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:06.852 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73656' 00:07:06.852 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 73656 00:07:06.852 [2024-11-02 23:47:00.906112] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:06.852 23:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 73656 00:07:06.852 [2024-11-02 23:47:00.920945] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:07.112 23:47:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.W6R3PanVTz 00:07:07.112 23:47:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:07.112 23:47:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:07.112 23:47:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:07.112 23:47:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:07.112 23:47:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:07.112 23:47:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:07.112 23:47:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:07.112 00:07:07.112 real 0m3.170s 00:07:07.112 user 0m4.037s 00:07:07.112 sys 0m0.512s 00:07:07.112 23:47:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:07.112 ************************************ 00:07:07.112 END TEST raid_write_error_test 00:07:07.112 ************************************ 00:07:07.112 23:47:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.112 23:47:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:07.112 23:47:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:07.112 23:47:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:07.112 23:47:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:07.112 23:47:01 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:07:07.112 ************************************ 00:07:07.112 START TEST raid_state_function_test 00:07:07.112 ************************************ 00:07:07.112 23:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:07:07.112 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73785 00:07:07.372 Process raid pid: 73785 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73785' 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73785 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 73785 ']' 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:07.372 23:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.372 [2024-11-02 23:47:01.295780] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:07:07.372 [2024-11-02 23:47:01.295891] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.372 [2024-11-02 23:47:01.451976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.633 [2024-11-02 23:47:01.477820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.633 [2024-11-02 23:47:01.520945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.633 [2024-11-02 23:47:01.520982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.210 [2024-11-02 23:47:02.134319] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:08.210 [2024-11-02 23:47:02.134446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:08.210 [2024-11-02 23:47:02.134493] bdev.c:8271:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:07:08.210 [2024-11-02 23:47:02.134520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.210 "name": "Existed_Raid", 00:07:08.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.210 "strip_size_kb": 0, 00:07:08.210 "state": "configuring", 00:07:08.210 "raid_level": "raid1", 00:07:08.210 "superblock": false, 00:07:08.210 "num_base_bdevs": 2, 00:07:08.210 "num_base_bdevs_discovered": 0, 00:07:08.210 "num_base_bdevs_operational": 2, 00:07:08.210 "base_bdevs_list": [ 00:07:08.210 { 00:07:08.210 "name": "BaseBdev1", 00:07:08.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.210 "is_configured": false, 00:07:08.210 "data_offset": 0, 00:07:08.210 "data_size": 0 00:07:08.210 }, 00:07:08.210 { 00:07:08.210 "name": "BaseBdev2", 00:07:08.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.210 "is_configured": false, 00:07:08.210 "data_offset": 0, 00:07:08.210 "data_size": 0 00:07:08.210 } 00:07:08.210 ] 00:07:08.210 }' 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.210 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.781 [2024-11-02 23:47:02.573544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:08.781 [2024-11-02 23:47:02.573630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.781 [2024-11-02 23:47:02.585513] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:08.781 [2024-11-02 23:47:02.585555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:08.781 [2024-11-02 23:47:02.585563] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:08.781 [2024-11-02 23:47:02.585582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.781 [2024-11-02 23:47:02.606332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:08.781 BaseBdev1 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:08.781 
23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.781 [ 00:07:08.781 { 00:07:08.781 "name": "BaseBdev1", 00:07:08.781 "aliases": [ 00:07:08.781 "2e2b996c-3831-4f0a-8d82-e6b27199792e" 00:07:08.781 ], 00:07:08.781 "product_name": "Malloc disk", 00:07:08.781 "block_size": 512, 00:07:08.781 "num_blocks": 65536, 00:07:08.781 "uuid": "2e2b996c-3831-4f0a-8d82-e6b27199792e", 00:07:08.781 "assigned_rate_limits": { 00:07:08.781 "rw_ios_per_sec": 0, 00:07:08.781 "rw_mbytes_per_sec": 0, 00:07:08.781 "r_mbytes_per_sec": 0, 00:07:08.781 "w_mbytes_per_sec": 0 00:07:08.781 }, 00:07:08.781 "claimed": true, 00:07:08.781 "claim_type": "exclusive_write", 00:07:08.781 "zoned": false, 00:07:08.781 "supported_io_types": { 00:07:08.781 "read": true, 00:07:08.781 "write": true, 00:07:08.781 "unmap": true, 00:07:08.781 "flush": true, 00:07:08.781 "reset": true, 00:07:08.781 "nvme_admin": false, 00:07:08.781 "nvme_io": false, 00:07:08.781 "nvme_io_md": false, 00:07:08.781 "write_zeroes": true, 00:07:08.781 "zcopy": true, 00:07:08.781 "get_zone_info": 
false, 00:07:08.781 "zone_management": false, 00:07:08.781 "zone_append": false, 00:07:08.781 "compare": false, 00:07:08.781 "compare_and_write": false, 00:07:08.781 "abort": true, 00:07:08.781 "seek_hole": false, 00:07:08.781 "seek_data": false, 00:07:08.781 "copy": true, 00:07:08.781 "nvme_iov_md": false 00:07:08.781 }, 00:07:08.781 "memory_domains": [ 00:07:08.781 { 00:07:08.781 "dma_device_id": "system", 00:07:08.781 "dma_device_type": 1 00:07:08.781 }, 00:07:08.781 { 00:07:08.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.781 "dma_device_type": 2 00:07:08.781 } 00:07:08.781 ], 00:07:08.781 "driver_specific": {} 00:07:08.781 } 00:07:08.781 ] 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:08.781 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:08.782 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:08.782 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:08.782 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:08.782 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.782 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.782 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.782 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.782 23:47:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.782 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.782 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.782 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.782 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.782 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.782 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.782 "name": "Existed_Raid", 00:07:08.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.782 "strip_size_kb": 0, 00:07:08.782 "state": "configuring", 00:07:08.782 "raid_level": "raid1", 00:07:08.782 "superblock": false, 00:07:08.782 "num_base_bdevs": 2, 00:07:08.782 "num_base_bdevs_discovered": 1, 00:07:08.782 "num_base_bdevs_operational": 2, 00:07:08.782 "base_bdevs_list": [ 00:07:08.782 { 00:07:08.782 "name": "BaseBdev1", 00:07:08.782 "uuid": "2e2b996c-3831-4f0a-8d82-e6b27199792e", 00:07:08.782 "is_configured": true, 00:07:08.782 "data_offset": 0, 00:07:08.782 "data_size": 65536 00:07:08.782 }, 00:07:08.782 { 00:07:08.782 "name": "BaseBdev2", 00:07:08.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.782 "is_configured": false, 00:07:08.782 "data_offset": 0, 00:07:08.782 "data_size": 0 00:07:08.782 } 00:07:08.782 ] 00:07:08.782 }' 00:07:08.782 23:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.782 23:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.042 [2024-11-02 23:47:03.069572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:09.042 [2024-11-02 23:47:03.069664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.042 [2024-11-02 23:47:03.081557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:09.042 [2024-11-02 23:47:03.083387] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:09.042 [2024-11-02 23:47:03.083461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.042 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.302 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.302 "name": "Existed_Raid", 00:07:09.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.302 "strip_size_kb": 0, 00:07:09.302 "state": "configuring", 00:07:09.302 "raid_level": "raid1", 00:07:09.302 "superblock": false, 00:07:09.302 "num_base_bdevs": 2, 00:07:09.302 "num_base_bdevs_discovered": 1, 00:07:09.302 "num_base_bdevs_operational": 2, 00:07:09.302 "base_bdevs_list": [ 00:07:09.302 { 00:07:09.302 "name": "BaseBdev1", 00:07:09.302 "uuid": "2e2b996c-3831-4f0a-8d82-e6b27199792e", 00:07:09.302 
"is_configured": true, 00:07:09.302 "data_offset": 0, 00:07:09.302 "data_size": 65536 00:07:09.302 }, 00:07:09.302 { 00:07:09.302 "name": "BaseBdev2", 00:07:09.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.302 "is_configured": false, 00:07:09.302 "data_offset": 0, 00:07:09.302 "data_size": 0 00:07:09.302 } 00:07:09.302 ] 00:07:09.302 }' 00:07:09.302 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.302 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.562 [2024-11-02 23:47:03.547593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:09.562 [2024-11-02 23:47:03.547720] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:09.562 [2024-11-02 23:47:03.547764] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:09.562 [2024-11-02 23:47:03.548060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:09.562 [2024-11-02 23:47:03.548236] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:09.562 [2024-11-02 23:47:03.548282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:09.562 [2024-11-02 23:47:03.548521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.562 BaseBdev2 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.562 [ 00:07:09.562 { 00:07:09.562 "name": "BaseBdev2", 00:07:09.562 "aliases": [ 00:07:09.562 "9e106d00-bbb0-4a80-aafc-fab59ec032cd" 00:07:09.562 ], 00:07:09.562 "product_name": "Malloc disk", 00:07:09.562 "block_size": 512, 00:07:09.562 "num_blocks": 65536, 00:07:09.562 "uuid": "9e106d00-bbb0-4a80-aafc-fab59ec032cd", 00:07:09.562 "assigned_rate_limits": { 00:07:09.562 "rw_ios_per_sec": 0, 00:07:09.562 "rw_mbytes_per_sec": 0, 00:07:09.562 "r_mbytes_per_sec": 0, 00:07:09.562 "w_mbytes_per_sec": 0 00:07:09.562 }, 00:07:09.562 "claimed": true, 00:07:09.562 "claim_type": 
"exclusive_write", 00:07:09.562 "zoned": false, 00:07:09.562 "supported_io_types": { 00:07:09.562 "read": true, 00:07:09.562 "write": true, 00:07:09.562 "unmap": true, 00:07:09.562 "flush": true, 00:07:09.562 "reset": true, 00:07:09.562 "nvme_admin": false, 00:07:09.562 "nvme_io": false, 00:07:09.562 "nvme_io_md": false, 00:07:09.562 "write_zeroes": true, 00:07:09.562 "zcopy": true, 00:07:09.562 "get_zone_info": false, 00:07:09.562 "zone_management": false, 00:07:09.562 "zone_append": false, 00:07:09.562 "compare": false, 00:07:09.562 "compare_and_write": false, 00:07:09.562 "abort": true, 00:07:09.562 "seek_hole": false, 00:07:09.562 "seek_data": false, 00:07:09.562 "copy": true, 00:07:09.562 "nvme_iov_md": false 00:07:09.562 }, 00:07:09.562 "memory_domains": [ 00:07:09.562 { 00:07:09.562 "dma_device_id": "system", 00:07:09.562 "dma_device_type": 1 00:07:09.562 }, 00:07:09.562 { 00:07:09.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.562 "dma_device_type": 2 00:07:09.562 } 00:07:09.562 ], 00:07:09.562 "driver_specific": {} 00:07:09.562 } 00:07:09.562 ] 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:09.562 
23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.562 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.563 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.563 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.563 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.563 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.563 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.563 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.563 "name": "Existed_Raid", 00:07:09.563 "uuid": "6093e070-36bf-4949-a963-cad4fc580c07", 00:07:09.563 "strip_size_kb": 0, 00:07:09.563 "state": "online", 00:07:09.563 "raid_level": "raid1", 00:07:09.563 "superblock": false, 00:07:09.563 "num_base_bdevs": 2, 00:07:09.563 "num_base_bdevs_discovered": 2, 00:07:09.563 "num_base_bdevs_operational": 2, 00:07:09.563 "base_bdevs_list": [ 00:07:09.563 { 00:07:09.563 "name": "BaseBdev1", 00:07:09.563 "uuid": "2e2b996c-3831-4f0a-8d82-e6b27199792e", 00:07:09.563 "is_configured": true, 00:07:09.563 "data_offset": 0, 00:07:09.563 "data_size": 65536 00:07:09.563 }, 00:07:09.563 { 00:07:09.563 "name": "BaseBdev2", 
00:07:09.563 "uuid": "9e106d00-bbb0-4a80-aafc-fab59ec032cd", 00:07:09.563 "is_configured": true, 00:07:09.563 "data_offset": 0, 00:07:09.563 "data_size": 65536 00:07:09.563 } 00:07:09.563 ] 00:07:09.563 }' 00:07:09.563 23:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.563 23:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.131 [2024-11-02 23:47:04.019106] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:10.131 "name": "Existed_Raid", 00:07:10.131 "aliases": [ 00:07:10.131 "6093e070-36bf-4949-a963-cad4fc580c07" 00:07:10.131 ], 
00:07:10.131 "product_name": "Raid Volume", 00:07:10.131 "block_size": 512, 00:07:10.131 "num_blocks": 65536, 00:07:10.131 "uuid": "6093e070-36bf-4949-a963-cad4fc580c07", 00:07:10.131 "assigned_rate_limits": { 00:07:10.131 "rw_ios_per_sec": 0, 00:07:10.131 "rw_mbytes_per_sec": 0, 00:07:10.131 "r_mbytes_per_sec": 0, 00:07:10.131 "w_mbytes_per_sec": 0 00:07:10.131 }, 00:07:10.131 "claimed": false, 00:07:10.131 "zoned": false, 00:07:10.131 "supported_io_types": { 00:07:10.131 "read": true, 00:07:10.131 "write": true, 00:07:10.131 "unmap": false, 00:07:10.131 "flush": false, 00:07:10.131 "reset": true, 00:07:10.131 "nvme_admin": false, 00:07:10.131 "nvme_io": false, 00:07:10.131 "nvme_io_md": false, 00:07:10.131 "write_zeroes": true, 00:07:10.131 "zcopy": false, 00:07:10.131 "get_zone_info": false, 00:07:10.131 "zone_management": false, 00:07:10.131 "zone_append": false, 00:07:10.131 "compare": false, 00:07:10.131 "compare_and_write": false, 00:07:10.131 "abort": false, 00:07:10.131 "seek_hole": false, 00:07:10.131 "seek_data": false, 00:07:10.131 "copy": false, 00:07:10.131 "nvme_iov_md": false 00:07:10.131 }, 00:07:10.131 "memory_domains": [ 00:07:10.131 { 00:07:10.131 "dma_device_id": "system", 00:07:10.131 "dma_device_type": 1 00:07:10.131 }, 00:07:10.131 { 00:07:10.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.131 "dma_device_type": 2 00:07:10.131 }, 00:07:10.131 { 00:07:10.131 "dma_device_id": "system", 00:07:10.131 "dma_device_type": 1 00:07:10.131 }, 00:07:10.131 { 00:07:10.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.131 "dma_device_type": 2 00:07:10.131 } 00:07:10.131 ], 00:07:10.131 "driver_specific": { 00:07:10.131 "raid": { 00:07:10.131 "uuid": "6093e070-36bf-4949-a963-cad4fc580c07", 00:07:10.131 "strip_size_kb": 0, 00:07:10.131 "state": "online", 00:07:10.131 "raid_level": "raid1", 00:07:10.131 "superblock": false, 00:07:10.131 "num_base_bdevs": 2, 00:07:10.131 "num_base_bdevs_discovered": 2, 00:07:10.131 "num_base_bdevs_operational": 
2, 00:07:10.131 "base_bdevs_list": [ 00:07:10.131 { 00:07:10.131 "name": "BaseBdev1", 00:07:10.131 "uuid": "2e2b996c-3831-4f0a-8d82-e6b27199792e", 00:07:10.131 "is_configured": true, 00:07:10.131 "data_offset": 0, 00:07:10.131 "data_size": 65536 00:07:10.131 }, 00:07:10.131 { 00:07:10.131 "name": "BaseBdev2", 00:07:10.131 "uuid": "9e106d00-bbb0-4a80-aafc-fab59ec032cd", 00:07:10.131 "is_configured": true, 00:07:10.131 "data_offset": 0, 00:07:10.131 "data_size": 65536 00:07:10.131 } 00:07:10.131 ] 00:07:10.131 } 00:07:10.131 } 00:07:10.131 }' 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:10.131 BaseBdev2' 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:10.131 23:47:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:10.131 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.132 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.132 [2024-11-02 23:47:04.202603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:10.132 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.132 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:10.132 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:10.132 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:10.132 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:07:10.132 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:10.132 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:10.132 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:10.132 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:10.132 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:10.132 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:10.132 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:10.132 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.132 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.132 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.132 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.132 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.391 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.391 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.391 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.391 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.391 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.391 "name": "Existed_Raid", 00:07:10.391 "uuid": 
"6093e070-36bf-4949-a963-cad4fc580c07", 00:07:10.391 "strip_size_kb": 0, 00:07:10.391 "state": "online", 00:07:10.391 "raid_level": "raid1", 00:07:10.391 "superblock": false, 00:07:10.391 "num_base_bdevs": 2, 00:07:10.391 "num_base_bdevs_discovered": 1, 00:07:10.391 "num_base_bdevs_operational": 1, 00:07:10.391 "base_bdevs_list": [ 00:07:10.391 { 00:07:10.391 "name": null, 00:07:10.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.391 "is_configured": false, 00:07:10.391 "data_offset": 0, 00:07:10.391 "data_size": 65536 00:07:10.391 }, 00:07:10.391 { 00:07:10.391 "name": "BaseBdev2", 00:07:10.391 "uuid": "9e106d00-bbb0-4a80-aafc-fab59ec032cd", 00:07:10.391 "is_configured": true, 00:07:10.391 "data_offset": 0, 00:07:10.391 "data_size": 65536 00:07:10.391 } 00:07:10.391 ] 00:07:10.391 }' 00:07:10.391 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.391 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.651 [2024-11-02 23:47:04.697042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:10.651 [2024-11-02 23:47:04.697173] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:10.651 [2024-11-02 23:47:04.708692] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.651 [2024-11-02 23:47:04.708814] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:10.651 [2024-11-02 23:47:04.708893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.651 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.912 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:10.912 
23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:10.912 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:10.912 23:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73785 00:07:10.912 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 73785 ']' 00:07:10.912 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 73785 00:07:10.912 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:07:10.912 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:10.912 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73785 00:07:10.912 killing process with pid 73785 00:07:10.912 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:10.912 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:10.912 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73785' 00:07:10.912 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 73785 00:07:10.912 [2024-11-02 23:47:04.802798] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:10.912 23:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 73785 00:07:10.912 [2024-11-02 23:47:04.803798] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:11.173 00:07:11.173 real 0m3.809s 00:07:11.173 user 0m6.037s 00:07:11.173 sys 0m0.734s 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:07:11.173 ************************************ 00:07:11.173 END TEST raid_state_function_test 00:07:11.173 ************************************ 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.173 23:47:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:11.173 23:47:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:11.173 23:47:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:11.173 23:47:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:11.173 ************************************ 00:07:11.173 START TEST raid_state_function_test_sb 00:07:11.173 ************************************ 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:11.173 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74025 00:07:11.174 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:11.174 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74025' 00:07:11.174 Process raid pid: 74025 00:07:11.174 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74025 00:07:11.174 23:47:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@833 -- # '[' -z 74025 ']' 00:07:11.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.174 23:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.174 23:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:11.174 23:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.174 23:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:11.174 23:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.174 [2024-11-02 23:47:05.170261] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:07:11.174 [2024-11-02 23:47:05.170471] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.434 [2024-11-02 23:47:05.324195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.434 [2024-11-02 23:47:05.350175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.434 [2024-11-02 23:47:05.391729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.434 [2024-11-02 23:47:05.391772] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.004 23:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:12.004 23:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:07:12.004 23:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:12.004 23:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.004 23:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.004 [2024-11-02 23:47:06.004479] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:12.004 [2024-11-02 23:47:06.004599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:12.004 [2024-11-02 23:47:06.004634] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:12.004 [2024-11-02 23:47:06.004659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:12.004 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.004 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:12.004 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.004 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:12.004 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:12.004 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:12.004 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.004 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.004 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.004 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:12.004 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.004 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.004 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.004 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.004 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.004 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.004 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.004 "name": "Existed_Raid", 00:07:12.004 "uuid": "5b8c12e8-3459-461a-a87b-2f4b1622aa4d", 00:07:12.004 "strip_size_kb": 0, 00:07:12.004 "state": "configuring", 00:07:12.004 "raid_level": "raid1", 00:07:12.004 "superblock": true, 00:07:12.004 "num_base_bdevs": 2, 00:07:12.004 "num_base_bdevs_discovered": 0, 00:07:12.004 "num_base_bdevs_operational": 2, 00:07:12.004 "base_bdevs_list": [ 00:07:12.004 { 00:07:12.004 "name": "BaseBdev1", 00:07:12.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.004 "is_configured": false, 00:07:12.004 "data_offset": 0, 00:07:12.004 "data_size": 0 00:07:12.004 }, 00:07:12.004 { 00:07:12.004 "name": "BaseBdev2", 00:07:12.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.004 "is_configured": false, 00:07:12.004 "data_offset": 0, 00:07:12.004 "data_size": 0 00:07:12.004 } 00:07:12.004 ] 00:07:12.004 }' 00:07:12.004 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.004 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.574 [2024-11-02 23:47:06.419690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:12.574 [2024-11-02 23:47:06.419810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.574 [2024-11-02 23:47:06.431664] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:12.574 [2024-11-02 23:47:06.431747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:12.574 [2024-11-02 23:47:06.431775] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:12.574 [2024-11-02 23:47:06.431809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:07:12.574 [2024-11-02 23:47:06.452340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:12.574 BaseBdev1 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.574 [ 00:07:12.574 { 00:07:12.574 "name": "BaseBdev1", 00:07:12.574 "aliases": [ 00:07:12.574 "235fd04a-17a5-4432-a5f1-f61893ee7aa5" 00:07:12.574 ], 00:07:12.574 "product_name": "Malloc disk", 00:07:12.574 "block_size": 512, 
00:07:12.574 "num_blocks": 65536, 00:07:12.574 "uuid": "235fd04a-17a5-4432-a5f1-f61893ee7aa5", 00:07:12.574 "assigned_rate_limits": { 00:07:12.574 "rw_ios_per_sec": 0, 00:07:12.574 "rw_mbytes_per_sec": 0, 00:07:12.574 "r_mbytes_per_sec": 0, 00:07:12.574 "w_mbytes_per_sec": 0 00:07:12.574 }, 00:07:12.574 "claimed": true, 00:07:12.574 "claim_type": "exclusive_write", 00:07:12.574 "zoned": false, 00:07:12.574 "supported_io_types": { 00:07:12.574 "read": true, 00:07:12.574 "write": true, 00:07:12.574 "unmap": true, 00:07:12.574 "flush": true, 00:07:12.574 "reset": true, 00:07:12.574 "nvme_admin": false, 00:07:12.574 "nvme_io": false, 00:07:12.574 "nvme_io_md": false, 00:07:12.574 "write_zeroes": true, 00:07:12.574 "zcopy": true, 00:07:12.574 "get_zone_info": false, 00:07:12.574 "zone_management": false, 00:07:12.574 "zone_append": false, 00:07:12.574 "compare": false, 00:07:12.574 "compare_and_write": false, 00:07:12.574 "abort": true, 00:07:12.574 "seek_hole": false, 00:07:12.574 "seek_data": false, 00:07:12.574 "copy": true, 00:07:12.574 "nvme_iov_md": false 00:07:12.574 }, 00:07:12.574 "memory_domains": [ 00:07:12.574 { 00:07:12.574 "dma_device_id": "system", 00:07:12.574 "dma_device_type": 1 00:07:12.574 }, 00:07:12.574 { 00:07:12.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.574 "dma_device_type": 2 00:07:12.574 } 00:07:12.574 ], 00:07:12.574 "driver_specific": {} 00:07:12.574 } 00:07:12.574 ] 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:12.574 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:12.575 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:12.575 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.575 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.575 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.575 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.575 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.575 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.575 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.575 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.575 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.575 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.575 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.575 "name": "Existed_Raid", 00:07:12.575 "uuid": "b87da6b5-2fcc-49bf-bde4-e0ace3dac68e", 00:07:12.575 "strip_size_kb": 0, 00:07:12.575 "state": "configuring", 00:07:12.575 "raid_level": "raid1", 00:07:12.575 "superblock": true, 00:07:12.575 "num_base_bdevs": 2, 00:07:12.575 "num_base_bdevs_discovered": 1, 00:07:12.575 "num_base_bdevs_operational": 2, 00:07:12.575 "base_bdevs_list": [ 00:07:12.575 { 00:07:12.575 "name": "BaseBdev1", 
00:07:12.575 "uuid": "235fd04a-17a5-4432-a5f1-f61893ee7aa5", 00:07:12.575 "is_configured": true, 00:07:12.575 "data_offset": 2048, 00:07:12.575 "data_size": 63488 00:07:12.575 }, 00:07:12.575 { 00:07:12.575 "name": "BaseBdev2", 00:07:12.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.575 "is_configured": false, 00:07:12.575 "data_offset": 0, 00:07:12.575 "data_size": 0 00:07:12.575 } 00:07:12.575 ] 00:07:12.575 }' 00:07:12.575 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.575 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.835 [2024-11-02 23:47:06.903634] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:12.835 [2024-11-02 23:47:06.903727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.835 [2024-11-02 23:47:06.915628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:12.835 [2024-11-02 23:47:06.917454] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:07:12.835 [2024-11-02 23:47:06.917543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:07:12.835 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.097 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.097 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.097 "name": "Existed_Raid", 00:07:13.097 "uuid": "5bcbece6-1a87-4b01-b0a9-13a8d8f49922", 00:07:13.097 "strip_size_kb": 0, 00:07:13.097 "state": "configuring", 00:07:13.097 "raid_level": "raid1", 00:07:13.097 "superblock": true, 00:07:13.097 "num_base_bdevs": 2, 00:07:13.097 "num_base_bdevs_discovered": 1, 00:07:13.097 "num_base_bdevs_operational": 2, 00:07:13.097 "base_bdevs_list": [ 00:07:13.097 { 00:07:13.097 "name": "BaseBdev1", 00:07:13.097 "uuid": "235fd04a-17a5-4432-a5f1-f61893ee7aa5", 00:07:13.097 "is_configured": true, 00:07:13.097 "data_offset": 2048, 00:07:13.097 "data_size": 63488 00:07:13.097 }, 00:07:13.097 { 00:07:13.097 "name": "BaseBdev2", 00:07:13.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.097 "is_configured": false, 00:07:13.097 "data_offset": 0, 00:07:13.097 "data_size": 0 00:07:13.097 } 00:07:13.097 ] 00:07:13.097 }' 00:07:13.097 23:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.097 23:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.357 [2024-11-02 23:47:07.373618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:13.357 [2024-11-02 23:47:07.373906] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:13.357 [2024-11-02 23:47:07.373966] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:13.357 [2024-11-02 23:47:07.374245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:13.357 BaseBdev2 00:07:13.357 [2024-11-02 23:47:07.374425] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:13.357 [2024-11-02 23:47:07.374446] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:13.357 [2024-11-02 23:47:07.374562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.357 [ 00:07:13.357 { 00:07:13.357 "name": "BaseBdev2", 00:07:13.357 "aliases": [ 00:07:13.357 "d80f9cc3-6c5e-4829-9fdb-ed77b70e130b" 00:07:13.357 ], 00:07:13.357 "product_name": "Malloc disk", 00:07:13.357 "block_size": 512, 00:07:13.357 "num_blocks": 65536, 00:07:13.357 "uuid": "d80f9cc3-6c5e-4829-9fdb-ed77b70e130b", 00:07:13.357 "assigned_rate_limits": { 00:07:13.357 "rw_ios_per_sec": 0, 00:07:13.357 "rw_mbytes_per_sec": 0, 00:07:13.357 "r_mbytes_per_sec": 0, 00:07:13.357 "w_mbytes_per_sec": 0 00:07:13.357 }, 00:07:13.357 "claimed": true, 00:07:13.357 "claim_type": "exclusive_write", 00:07:13.357 "zoned": false, 00:07:13.357 "supported_io_types": { 00:07:13.357 "read": true, 00:07:13.357 "write": true, 00:07:13.357 "unmap": true, 00:07:13.357 "flush": true, 00:07:13.357 "reset": true, 00:07:13.357 "nvme_admin": false, 00:07:13.357 "nvme_io": false, 00:07:13.357 "nvme_io_md": false, 00:07:13.357 "write_zeroes": true, 00:07:13.357 "zcopy": true, 00:07:13.357 "get_zone_info": false, 00:07:13.357 "zone_management": false, 00:07:13.357 "zone_append": false, 00:07:13.357 "compare": false, 00:07:13.357 "compare_and_write": false, 00:07:13.357 "abort": true, 00:07:13.357 "seek_hole": false, 00:07:13.357 "seek_data": false, 00:07:13.357 "copy": true, 00:07:13.357 "nvme_iov_md": false 00:07:13.357 }, 00:07:13.357 "memory_domains": [ 00:07:13.357 { 00:07:13.357 "dma_device_id": "system", 00:07:13.357 "dma_device_type": 1 00:07:13.357 }, 00:07:13.357 { 00:07:13.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.357 "dma_device_type": 2 00:07:13.357 } 00:07:13.357 ], 00:07:13.357 "driver_specific": 
{} 00:07:13.357 } 00:07:13.357 ] 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.357 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.617 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.617 "name": "Existed_Raid", 00:07:13.617 "uuid": "5bcbece6-1a87-4b01-b0a9-13a8d8f49922", 00:07:13.617 "strip_size_kb": 0, 00:07:13.617 "state": "online", 00:07:13.618 "raid_level": "raid1", 00:07:13.618 "superblock": true, 00:07:13.618 "num_base_bdevs": 2, 00:07:13.618 "num_base_bdevs_discovered": 2, 00:07:13.618 "num_base_bdevs_operational": 2, 00:07:13.618 "base_bdevs_list": [ 00:07:13.618 { 00:07:13.618 "name": "BaseBdev1", 00:07:13.618 "uuid": "235fd04a-17a5-4432-a5f1-f61893ee7aa5", 00:07:13.618 "is_configured": true, 00:07:13.618 "data_offset": 2048, 00:07:13.618 "data_size": 63488 00:07:13.618 }, 00:07:13.618 { 00:07:13.618 "name": "BaseBdev2", 00:07:13.618 "uuid": "d80f9cc3-6c5e-4829-9fdb-ed77b70e130b", 00:07:13.618 "is_configured": true, 00:07:13.618 "data_offset": 2048, 00:07:13.618 "data_size": 63488 00:07:13.618 } 00:07:13.618 ] 00:07:13.618 }' 00:07:13.618 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.618 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.876 [2024-11-02 23:47:07.829153] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:13.876 "name": "Existed_Raid", 00:07:13.876 "aliases": [ 00:07:13.876 "5bcbece6-1a87-4b01-b0a9-13a8d8f49922" 00:07:13.876 ], 00:07:13.876 "product_name": "Raid Volume", 00:07:13.876 "block_size": 512, 00:07:13.876 "num_blocks": 63488, 00:07:13.876 "uuid": "5bcbece6-1a87-4b01-b0a9-13a8d8f49922", 00:07:13.876 "assigned_rate_limits": { 00:07:13.876 "rw_ios_per_sec": 0, 00:07:13.876 "rw_mbytes_per_sec": 0, 00:07:13.876 "r_mbytes_per_sec": 0, 00:07:13.876 "w_mbytes_per_sec": 0 00:07:13.876 }, 00:07:13.876 "claimed": false, 00:07:13.876 "zoned": false, 00:07:13.876 "supported_io_types": { 00:07:13.876 "read": true, 00:07:13.876 "write": true, 00:07:13.876 "unmap": false, 00:07:13.876 "flush": false, 00:07:13.876 "reset": true, 00:07:13.876 "nvme_admin": false, 00:07:13.876 "nvme_io": false, 00:07:13.876 "nvme_io_md": false, 00:07:13.876 "write_zeroes": true, 00:07:13.876 "zcopy": false, 00:07:13.876 "get_zone_info": false, 00:07:13.876 "zone_management": false, 00:07:13.876 "zone_append": false, 00:07:13.876 "compare": false, 00:07:13.876 "compare_and_write": false, 
00:07:13.876 "abort": false, 00:07:13.876 "seek_hole": false, 00:07:13.876 "seek_data": false, 00:07:13.876 "copy": false, 00:07:13.876 "nvme_iov_md": false 00:07:13.876 }, 00:07:13.876 "memory_domains": [ 00:07:13.876 { 00:07:13.876 "dma_device_id": "system", 00:07:13.876 "dma_device_type": 1 00:07:13.876 }, 00:07:13.876 { 00:07:13.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.876 "dma_device_type": 2 00:07:13.876 }, 00:07:13.876 { 00:07:13.876 "dma_device_id": "system", 00:07:13.876 "dma_device_type": 1 00:07:13.876 }, 00:07:13.876 { 00:07:13.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.876 "dma_device_type": 2 00:07:13.876 } 00:07:13.876 ], 00:07:13.876 "driver_specific": { 00:07:13.876 "raid": { 00:07:13.876 "uuid": "5bcbece6-1a87-4b01-b0a9-13a8d8f49922", 00:07:13.876 "strip_size_kb": 0, 00:07:13.876 "state": "online", 00:07:13.876 "raid_level": "raid1", 00:07:13.876 "superblock": true, 00:07:13.876 "num_base_bdevs": 2, 00:07:13.876 "num_base_bdevs_discovered": 2, 00:07:13.876 "num_base_bdevs_operational": 2, 00:07:13.876 "base_bdevs_list": [ 00:07:13.876 { 00:07:13.876 "name": "BaseBdev1", 00:07:13.876 "uuid": "235fd04a-17a5-4432-a5f1-f61893ee7aa5", 00:07:13.876 "is_configured": true, 00:07:13.876 "data_offset": 2048, 00:07:13.876 "data_size": 63488 00:07:13.876 }, 00:07:13.876 { 00:07:13.876 "name": "BaseBdev2", 00:07:13.876 "uuid": "d80f9cc3-6c5e-4829-9fdb-ed77b70e130b", 00:07:13.876 "is_configured": true, 00:07:13.876 "data_offset": 2048, 00:07:13.876 "data_size": 63488 00:07:13.876 } 00:07:13.876 ] 00:07:13.876 } 00:07:13.876 } 00:07:13.876 }' 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:13.876 BaseBdev2' 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.876 23:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.136 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:14.136 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:14.136 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:14.136 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:14.136 23:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.136 [2024-11-02 23:47:08.052568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:14.136 23:47:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.136 "name": "Existed_Raid", 00:07:14.136 "uuid": "5bcbece6-1a87-4b01-b0a9-13a8d8f49922", 00:07:14.136 "strip_size_kb": 0, 00:07:14.136 "state": "online", 00:07:14.136 "raid_level": "raid1", 00:07:14.136 "superblock": true, 00:07:14.136 "num_base_bdevs": 2, 00:07:14.136 "num_base_bdevs_discovered": 1, 00:07:14.136 "num_base_bdevs_operational": 1, 00:07:14.136 "base_bdevs_list": [ 00:07:14.136 { 00:07:14.136 "name": null, 00:07:14.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.136 "is_configured": false, 00:07:14.136 "data_offset": 0, 00:07:14.136 "data_size": 63488 00:07:14.136 }, 00:07:14.136 { 00:07:14.136 "name": "BaseBdev2", 00:07:14.136 "uuid": "d80f9cc3-6c5e-4829-9fdb-ed77b70e130b", 00:07:14.136 "is_configured": true, 00:07:14.136 "data_offset": 2048, 00:07:14.136 "data_size": 63488 00:07:14.136 } 00:07:14.136 ] 00:07:14.136 }' 00:07:14.136 
23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.136 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.706 [2024-11-02 23:47:08.591162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:14.706 [2024-11-02 23:47:08.591400] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:14.706 [2024-11-02 23:47:08.612833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:14.706 [2024-11-02 23:47:08.612971] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:14.706 [2024-11-02 23:47:08.613023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74025 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 74025 ']' 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 74025 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74025 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74025' 00:07:14.706 killing process with pid 74025 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 74025 00:07:14.706 [2024-11-02 23:47:08.702179] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:14.706 23:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 74025 00:07:14.707 [2024-11-02 23:47:08.703831] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.969 23:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:14.969 ************************************ 00:07:14.969 END TEST raid_state_function_test_sb 00:07:14.969 ************************************ 00:07:14.969 00:07:14.969 real 0m3.948s 00:07:14.969 user 0m6.162s 00:07:14.969 sys 0m0.747s 00:07:14.969 23:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:14.969 23:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.229 23:47:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:15.229 23:47:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:15.229 23:47:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:15.229 23:47:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:15.229 
************************************ 00:07:15.229 START TEST raid_superblock_test 00:07:15.229 ************************************ 00:07:15.229 23:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:07:15.229 23:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:07:15.229 23:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:15.229 23:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:15.229 23:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:15.229 23:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:15.229 23:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:15.229 23:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:15.229 23:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:15.229 23:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:15.229 23:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:15.229 23:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:15.229 23:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:15.229 23:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:15.230 23:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:15.230 23:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:15.230 23:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74266 00:07:15.230 23:47:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:15.230 23:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74266 00:07:15.230 23:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 74266 ']' 00:07:15.230 23:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.230 23:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:15.230 23:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.230 23:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:15.230 23:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.230 [2024-11-02 23:47:09.185267] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:07:15.230 [2024-11-02 23:47:09.185482] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74266 ] 00:07:15.489 [2024-11-02 23:47:09.340463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.489 [2024-11-02 23:47:09.383603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.489 [2024-11-02 23:47:09.460430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.489 [2024-11-02 23:47:09.460592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:16.072 
23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.072 malloc1 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.072 [2024-11-02 23:47:10.043705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:16.072 [2024-11-02 23:47:10.043910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.072 [2024-11-02 23:47:10.043955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:16.072 [2024-11-02 23:47:10.044006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.072 [2024-11-02 23:47:10.046517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.072 [2024-11-02 23:47:10.046610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:16.072 pt1 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.072 malloc2 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.072 [2024-11-02 23:47:10.078677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:16.072 [2024-11-02 23:47:10.078835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.072 [2024-11-02 23:47:10.078877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:16.072 [2024-11-02 23:47:10.078913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.072 [2024-11-02 23:47:10.081216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.072 [2024-11-02 23:47:10.081299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:16.072 
pt2 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.072 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.072 [2024-11-02 23:47:10.090685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:16.072 [2024-11-02 23:47:10.092806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:16.072 [2024-11-02 23:47:10.093008] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:16.072 [2024-11-02 23:47:10.093064] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:16.072 [2024-11-02 23:47:10.093359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:16.072 [2024-11-02 23:47:10.093574] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:16.072 [2024-11-02 23:47:10.093621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:16.073 [2024-11-02 23:47:10.093835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.073 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.073 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:16.073 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:07:16.073 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:16.073 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:16.073 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:16.073 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.073 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.073 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.073 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.073 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.073 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:16.073 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.073 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.073 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.073 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.073 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.073 "name": "raid_bdev1", 00:07:16.073 "uuid": "7bbab532-c60f-44c9-bc08-89a0737c43af", 00:07:16.073 "strip_size_kb": 0, 00:07:16.073 "state": "online", 00:07:16.073 "raid_level": "raid1", 00:07:16.073 "superblock": true, 00:07:16.073 "num_base_bdevs": 2, 00:07:16.073 "num_base_bdevs_discovered": 2, 00:07:16.073 "num_base_bdevs_operational": 2, 00:07:16.073 "base_bdevs_list": [ 00:07:16.073 { 00:07:16.073 "name": "pt1", 00:07:16.073 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:16.073 "is_configured": true, 00:07:16.073 "data_offset": 2048, 00:07:16.073 "data_size": 63488 00:07:16.073 }, 00:07:16.073 { 00:07:16.073 "name": "pt2", 00:07:16.073 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:16.073 "is_configured": true, 00:07:16.073 "data_offset": 2048, 00:07:16.073 "data_size": 63488 00:07:16.073 } 00:07:16.073 ] 00:07:16.073 }' 00:07:16.073 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.073 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.646 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:16.646 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:16.646 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:16.646 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:16.646 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:16.646 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:16.646 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:16.646 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:16.646 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.646 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.646 [2024-11-02 23:47:10.534633] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.646 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.646 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:07:16.646 "name": "raid_bdev1", 00:07:16.646 "aliases": [ 00:07:16.646 "7bbab532-c60f-44c9-bc08-89a0737c43af" 00:07:16.646 ], 00:07:16.646 "product_name": "Raid Volume", 00:07:16.646 "block_size": 512, 00:07:16.646 "num_blocks": 63488, 00:07:16.646 "uuid": "7bbab532-c60f-44c9-bc08-89a0737c43af", 00:07:16.646 "assigned_rate_limits": { 00:07:16.646 "rw_ios_per_sec": 0, 00:07:16.646 "rw_mbytes_per_sec": 0, 00:07:16.646 "r_mbytes_per_sec": 0, 00:07:16.646 "w_mbytes_per_sec": 0 00:07:16.646 }, 00:07:16.646 "claimed": false, 00:07:16.646 "zoned": false, 00:07:16.646 "supported_io_types": { 00:07:16.646 "read": true, 00:07:16.646 "write": true, 00:07:16.646 "unmap": false, 00:07:16.646 "flush": false, 00:07:16.646 "reset": true, 00:07:16.646 "nvme_admin": false, 00:07:16.646 "nvme_io": false, 00:07:16.646 "nvme_io_md": false, 00:07:16.646 "write_zeroes": true, 00:07:16.646 "zcopy": false, 00:07:16.646 "get_zone_info": false, 00:07:16.646 "zone_management": false, 00:07:16.646 "zone_append": false, 00:07:16.646 "compare": false, 00:07:16.646 "compare_and_write": false, 00:07:16.646 "abort": false, 00:07:16.646 "seek_hole": false, 00:07:16.646 "seek_data": false, 00:07:16.646 "copy": false, 00:07:16.646 "nvme_iov_md": false 00:07:16.646 }, 00:07:16.646 "memory_domains": [ 00:07:16.646 { 00:07:16.646 "dma_device_id": "system", 00:07:16.646 "dma_device_type": 1 00:07:16.646 }, 00:07:16.646 { 00:07:16.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.646 "dma_device_type": 2 00:07:16.646 }, 00:07:16.647 { 00:07:16.647 "dma_device_id": "system", 00:07:16.647 "dma_device_type": 1 00:07:16.647 }, 00:07:16.647 { 00:07:16.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.647 "dma_device_type": 2 00:07:16.647 } 00:07:16.647 ], 00:07:16.647 "driver_specific": { 00:07:16.647 "raid": { 00:07:16.647 "uuid": "7bbab532-c60f-44c9-bc08-89a0737c43af", 00:07:16.647 "strip_size_kb": 0, 00:07:16.647 "state": "online", 00:07:16.647 "raid_level": "raid1", 
00:07:16.647 "superblock": true, 00:07:16.647 "num_base_bdevs": 2, 00:07:16.647 "num_base_bdevs_discovered": 2, 00:07:16.647 "num_base_bdevs_operational": 2, 00:07:16.647 "base_bdevs_list": [ 00:07:16.647 { 00:07:16.647 "name": "pt1", 00:07:16.647 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:16.647 "is_configured": true, 00:07:16.647 "data_offset": 2048, 00:07:16.647 "data_size": 63488 00:07:16.647 }, 00:07:16.647 { 00:07:16.647 "name": "pt2", 00:07:16.647 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:16.647 "is_configured": true, 00:07:16.647 "data_offset": 2048, 00:07:16.647 "data_size": 63488 00:07:16.647 } 00:07:16.647 ] 00:07:16.647 } 00:07:16.647 } 00:07:16.647 }' 00:07:16.647 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:16.647 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:16.647 pt2' 00:07:16.647 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.647 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:16.647 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.647 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:16.647 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.647 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.647 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.647 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.647 23:47:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.647 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.647 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.647 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:16.647 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.647 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.647 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.647 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.908 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.908 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.908 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:16.908 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:16.908 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.908 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.908 [2024-11-02 23:47:10.762115] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.908 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.908 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7bbab532-c60f-44c9-bc08-89a0737c43af 00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7bbab532-c60f-44c9-bc08-89a0737c43af ']' 00:07:16.909 23:47:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.909 [2024-11-02 23:47:10.789820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:16.909 [2024-11-02 23:47:10.789929] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:16.909 [2024-11-02 23:47:10.790098] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.909 [2024-11-02 23:47:10.790238] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:16.909 [2024-11-02 23:47:10.790300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:16.909 [2024-11-02 23:47:10.913570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:07:16.909 [2024-11-02 23:47:10.915897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:07:16.909 [2024-11-02 23:47:10.915982] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:07:16.909 [2024-11-02 23:47:10.916036] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:07:16.909 [2024-11-02 23:47:10.916055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:16.909 [2024-11-02 23:47:10.916082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring
00:07:16.909 request:
00:07:16.909 {
00:07:16.909 "name": "raid_bdev1",
00:07:16.909 "raid_level": "raid1",
00:07:16.909 "base_bdevs": [
00:07:16.909 "malloc1",
00:07:16.909 "malloc2"
00:07:16.909 ],
00:07:16.909 "superblock": false,
00:07:16.909 "method": "bdev_raid_create",
00:07:16.909 "req_id": 1
00:07:16.909 }
00:07:16.909 Got JSON-RPC error response
00:07:16.909 response:
00:07:16.909 {
00:07:16.909 "code": -17,
00:07:16.909 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:07:16.909 }
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:16.909 [2024-11-02 23:47:10.977542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:07:16.909 [2024-11-02 23:47:10.977752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:16.909 [2024-11-02 23:47:10.977805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:07:16.909 [2024-11-02 23:47:10.977849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:16.909 [2024-11-02 23:47:10.980550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:16.909 [2024-11-02 23:47:10.980655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:07:16.909 [2024-11-02 23:47:10.980825] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:07:16.909 [2024-11-02 23:47:10.980910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:07:16.909 pt1
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.909 23:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.168 23:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:17.168 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:17.168 "name": "raid_bdev1",
00:07:17.168 "uuid": "7bbab532-c60f-44c9-bc08-89a0737c43af",
00:07:17.168 "strip_size_kb": 0,
00:07:17.168 "state": "configuring",
00:07:17.168 "raid_level": "raid1",
00:07:17.168 "superblock": true,
00:07:17.168 "num_base_bdevs": 2,
00:07:17.168 "num_base_bdevs_discovered": 1,
00:07:17.168 "num_base_bdevs_operational": 2,
00:07:17.168 "base_bdevs_list": [
00:07:17.168 {
00:07:17.168 "name": "pt1",
00:07:17.169 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:17.169 "is_configured": true,
00:07:17.169 "data_offset": 2048,
00:07:17.169 "data_size": 63488
00:07:17.169 },
00:07:17.169 {
00:07:17.169 "name": null,
00:07:17.169 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:17.169 "is_configured": false,
00:07:17.169 "data_offset": 2048,
00:07:17.169 "data_size": 63488
00:07:17.169 }
00:07:17.169 ]
00:07:17.169 }'
00:07:17.169 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:17.169 23:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.428 [2024-11-02 23:47:11.444725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:17.428 [2024-11-02 23:47:11.444913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:17.428 [2024-11-02 23:47:11.444965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:07:17.428 [2024-11-02 23:47:11.445015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:17.428 [2024-11-02 23:47:11.445614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:17.428 [2024-11-02 23:47:11.445686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:07:17.428 [2024-11-02 23:47:11.445853] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:07:17.428 [2024-11-02 23:47:11.445919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:17.428 [2024-11-02 23:47:11.446086] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
00:07:17.428 [2024-11-02 23:47:11.446130] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:07:17.428 [2024-11-02 23:47:11.446473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:07:17.428 [2024-11-02 23:47:11.446676] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
00:07:17.428 [2024-11-02 23:47:11.446735] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900
00:07:17.428 [2024-11-02 23:47:11.446941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:17.428 pt2
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
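The failed `bdev_raid_create` above comes back as a JSON-RPC error object (code -17, "File exists"), and the `NOT` wrapper treats the non-zero exit as the expected outcome. As a minimal offline sketch of how that error body can be picked apart with `jq` (the tool the test scripts themselves use), run against the exact response printed in the log:

```shell
# JSON-RPC error response body, copied from the log above.
response='{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}'

# Extract the numeric error code with jq, as the test harness does for other fields.
code=$(printf '%s' "$response" | jq -r '.code')
echo "$code"
```

Code -17 here maps to the kernel-style `-EEXIST`, which is why the message ends in "File exists".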
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:17.428 "name": "raid_bdev1",
00:07:17.428 "uuid": "7bbab532-c60f-44c9-bc08-89a0737c43af",
00:07:17.428 "strip_size_kb": 0,
00:07:17.428 "state": "online",
00:07:17.428 "raid_level": "raid1",
00:07:17.428 "superblock": true,
00:07:17.428 "num_base_bdevs": 2,
00:07:17.428 "num_base_bdevs_discovered": 2,
00:07:17.428 "num_base_bdevs_operational": 2,
00:07:17.428 "base_bdevs_list": [
00:07:17.428 {
00:07:17.428 "name": "pt1",
00:07:17.428 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:17.428 "is_configured": true,
00:07:17.428 "data_offset": 2048,
00:07:17.428 "data_size": 63488
00:07:17.428 },
00:07:17.428 {
00:07:17.428 "name": "pt2",
00:07:17.428 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:17.428 "is_configured": true,
00:07:17.428 "data_offset": 2048,
00:07:17.428 "data_size": 63488
00:07:17.428 }
00:07:17.428 ]
00:07:17.428 }'
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:17.428 23:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.996 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:07:17.997 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:07:17.997 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:17.997 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:17.997 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:17.997 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:17.997 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:17.997 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:17.997 23:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:17.997 23:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.997 [2024-11-02 23:47:11.868197] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:17.997 23:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:17.997 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:17.997 "name": "raid_bdev1",
00:07:17.997 "aliases": [
00:07:17.997 "7bbab532-c60f-44c9-bc08-89a0737c43af"
00:07:17.997 ],
00:07:17.997 "product_name": "Raid Volume",
00:07:17.997 "block_size": 512,
00:07:17.997 "num_blocks": 63488,
00:07:17.997 "uuid": "7bbab532-c60f-44c9-bc08-89a0737c43af",
00:07:17.997 "assigned_rate_limits": {
00:07:17.997 "rw_ios_per_sec": 0,
00:07:17.997 "rw_mbytes_per_sec": 0,
00:07:17.997 "r_mbytes_per_sec": 0,
00:07:17.997 "w_mbytes_per_sec": 0
00:07:17.997 },
00:07:17.997 "claimed": false,
00:07:17.997 "zoned": false,
00:07:17.997 "supported_io_types": {
00:07:17.997 "read": true,
00:07:17.997 "write": true,
00:07:17.997 "unmap": false,
00:07:17.997 "flush": false,
00:07:17.997 "reset": true,
00:07:17.997 "nvme_admin": false,
00:07:17.997 "nvme_io": false,
00:07:17.997 "nvme_io_md": false,
00:07:17.997 "write_zeroes": true,
00:07:17.997 "zcopy": false,
00:07:17.997 "get_zone_info": false,
00:07:17.997 "zone_management": false,
00:07:17.997 "zone_append": false,
00:07:17.997 "compare": false,
00:07:17.997 "compare_and_write": false,
00:07:17.997 "abort": false,
00:07:17.997 "seek_hole": false,
00:07:17.997 "seek_data": false,
00:07:17.997 "copy": false,
00:07:17.997 "nvme_iov_md": false
00:07:17.997 },
00:07:17.997 "memory_domains": [
00:07:17.997 {
00:07:17.997 "dma_device_id": "system",
00:07:17.997 "dma_device_type": 1
00:07:17.997 },
00:07:17.997 {
00:07:17.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:17.997 "dma_device_type": 2
00:07:17.997 },
00:07:17.997 {
00:07:17.997 "dma_device_id": "system",
00:07:17.997 "dma_device_type": 1
00:07:17.997 },
00:07:17.997 {
00:07:17.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:17.997 "dma_device_type": 2
00:07:17.997 }
00:07:17.997 ],
00:07:17.997 "driver_specific": {
00:07:17.997 "raid": {
00:07:17.997 "uuid": "7bbab532-c60f-44c9-bc08-89a0737c43af",
00:07:17.997 "strip_size_kb": 0,
00:07:17.997 "state": "online",
00:07:17.997 "raid_level": "raid1",
00:07:17.997 "superblock": true,
00:07:17.997 "num_base_bdevs": 2,
00:07:17.997 "num_base_bdevs_discovered": 2,
00:07:17.997 "num_base_bdevs_operational": 2,
00:07:17.997 "base_bdevs_list": [
00:07:17.997 {
00:07:17.997 "name": "pt1",
00:07:17.997 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:17.997 "is_configured": true,
00:07:17.997 "data_offset": 2048,
00:07:17.997 "data_size": 63488
00:07:17.997 },
00:07:17.997 {
00:07:17.997 "name": "pt2",
00:07:17.997 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:17.997 "is_configured": true,
00:07:17.997 "data_offset": 2048,
00:07:17.997 "data_size": 63488
00:07:17.997 }
00:07:17.997 ]
00:07:17.997 }
00:07:17.997 }
00:07:17.997 }'
00:07:17.997 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:17.997 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:07:17.997 pt2'
00:07:17.997 23:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:17.997 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:17.997 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
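`verify_raid_bdev_state` works by pulling the raid bdev's object out of `bdev_raid_get_bdevs` with a `jq` select and then comparing individual fields against the expected values. A small offline sketch of that filter shape, run against an abridged copy of the JSON shown above (the field names and values are taken from the log; the array is trimmed for brevity):

```shell
# Abridged bdev_raid_get_bdevs output, copied from the log above.
bdevs='[
  {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2
  }
]'

# Same shape of filter the test uses at bdev_raid.sh@113.
info=$(printf '%s' "$bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')

# Field checks analogous to what verify_raid_bdev_state asserts.
state=$(printf '%s' "$info" | jq -r '.state')
discovered=$(printf '%s' "$info" | jq -r '.num_base_bdevs_discovered')
echo "$state $discovered"
```

The same filter returns nothing once the raid bdev is deleted, which is how the later `raid_bdev=` / `'[' -n '' ']'` steps in the log detect that no raid bdev is left.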
00:07:17.997 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:07:17.997 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:17.997 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:17.997 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.997 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:17.997 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:17.997 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:17.997 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:17.997 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:17.997 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:07:17.997 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:17.997 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.997 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:18.256 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:18.257 [2024-11-02 23:47:12.115902] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7bbab532-c60f-44c9-bc08-89a0737c43af '!=' 7bbab532-c60f-44c9-bc08-89a0737c43af ']'
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:18.257 [2024-11-02 23:47:12.143529] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:18.257 "name": "raid_bdev1",
00:07:18.257 "uuid": "7bbab532-c60f-44c9-bc08-89a0737c43af",
00:07:18.257 "strip_size_kb": 0,
00:07:18.257 "state": "online",
00:07:18.257 "raid_level": "raid1",
00:07:18.257 "superblock": true,
00:07:18.257 "num_base_bdevs": 2,
00:07:18.257 "num_base_bdevs_discovered": 1,
00:07:18.257 "num_base_bdevs_operational": 1,
00:07:18.257 "base_bdevs_list": [
00:07:18.257 {
00:07:18.257 "name": null,
00:07:18.257 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:18.257 "is_configured": false,
00:07:18.257 "data_offset": 0,
00:07:18.257 "data_size": 63488
00:07:18.257 },
00:07:18.257 {
00:07:18.257 "name": "pt2",
00:07:18.257 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:18.257 "is_configured": true,
00:07:18.257 "data_offset": 2048,
00:07:18.257 "data_size": 63488
00:07:18.257 }
00:07:18.257 ]
00:07:18.257 }'
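`verify_raid_bdev_properties` builds a geometry fingerprint for the raid volume and each base bdev with the `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` filter seen above, then string-compares them; `jq`'s `join` renders absent/null fields as empty strings, which is why the log shows values like `'512 '` with trailing separators. A minimal offline sketch of that comparison (the JSON fragments are trimmed from the `bdev_get_bdevs` output in the log; only `block_size` is present, the other keys are intentionally missing):

```shell
# Abridged bdev_get_bdevs output for the raid volume, trimmed from the log above.
raid='{ "block_size": 512, "num_blocks": 63488 }'

# Geometry fingerprint as built at bdev_raid.sh@189; missing md_size /
# md_interleave / dif_type come back as null and join as empty strings.
cmp_raid_bdev=$(printf '%s' "$raid" | jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')

# A base bdev with identical geometry must yield the same fingerprint.
base='{ "block_size": 512, "num_blocks": 63488 }'
cmp_base_bdev=$(printf '%s' "$base" | jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')

test "$cmp_raid_bdev" = "$cmp_base_bdev" && echo match
```

Comparing the joined string rather than each field keeps the test to a single `[[ ... == ... ]]`, at the cost of the slightly odd-looking escaped-space patterns in the trace.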
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:18.257 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:18.516 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:18.516 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:18.516 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:18.516 [2024-11-02 23:47:12.534909] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:18.516 [2024-11-02 23:47:12.535042] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:18.516 [2024-11-02 23:47:12.535189] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:18.516 [2024-11-02 23:47:12.535293] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:18.516 [2024-11-02 23:47:12.535347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline
00:07:18.516 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:18.516 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:18.516 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:07:18.516 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:18.517 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:18.517 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:18.517 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:07:18.517 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:07:18.517 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:07:18.517 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:07:18.517 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:07:18.517 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:18.517 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:18.517 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:18.517 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:07:18.517 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:07:18.517 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:07:18.517 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:07:18.517 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1
00:07:18.517 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:18.517 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:18.517 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:18.517 [2024-11-02 23:47:12.606696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:18.517 [2024-11-02 23:47:12.606854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:18.517 [2024-11-02 23:47:12.606903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:07:18.517 [2024-11-02 23:47:12.606945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-02 23:47:12.609478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:18.776 [2024-11-02 23:47:12.609564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:07:18.776 [2024-11-02 23:47:12.609673] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:07:18.776 [2024-11-02 23:47:12.609711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:18.776 [2024-11-02 23:47:12.609854] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80
00:07:18.776 [2024-11-02 23:47:12.609865] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:07:18.776 [2024-11-02 23:47:12.610134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530
00:07:18.776 [2024-11-02 23:47:12.610258] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80
00:07:18.776 [2024-11-02 23:47:12.610272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80
00:07:18.776 [2024-11-02 23:47:12.610396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:18.776 pt2
00:07:18.776 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:18.776 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:07:18.776 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:18.776 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:18.776 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:18.776 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:18.776 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:07:18.776 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:18.776 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:18.776 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:18.776 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:18.776 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:18.776 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:18.776 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:18.776 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:18.776 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:18.776 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:18.776 "name": "raid_bdev1",
00:07:18.776 "uuid": "7bbab532-c60f-44c9-bc08-89a0737c43af",
00:07:18.776 "strip_size_kb": 0,
00:07:18.776 "state": "online",
00:07:18.776 "raid_level": "raid1",
00:07:18.776 "superblock": true,
00:07:18.776 "num_base_bdevs": 2,
00:07:18.776 "num_base_bdevs_discovered": 1,
00:07:18.776 "num_base_bdevs_operational": 1,
00:07:18.776 "base_bdevs_list": [
00:07:18.776 {
00:07:18.776 "name": null,
00:07:18.776 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:18.776 "is_configured": false,
00:07:18.776 "data_offset": 2048,
00:07:18.776 "data_size": 63488
00:07:18.776 },
00:07:18.776 {
00:07:18.776 "name": "pt2",
00:07:18.776 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:18.776 "is_configured": true,
00:07:18.776 "data_offset": 2048,
00:07:18.776 "data_size": 63488
00:07:18.776 }
00:07:18.776 ]
00:07:18.776 }'
00:07:18.776 23:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:18.776 23:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:19.039 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:19.039 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:19.039 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:19.039 [2024-11-02 23:47:13.065986] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:19.039 [2024-11-02 23:47:13.066072] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:19.040 [2024-11-02 23:47:13.066183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:19.040 [2024-11-02 23:47:13.066256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:19.040 [2024-11-02 23:47:13.066313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline
00:07:19.040 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:19.040 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:07:19.040 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:19.040 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:19.040 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:19.040 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:19.040 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:07:19.040 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:07:19.040 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']'
00:07:19.040 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:19.040 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:19.040 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:19.040 [2024-11-02 23:47:13.125903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:07:19.040 [2024-11-02 23:47:13.125992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:19.040 [2024-11-02 23:47:13.126015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80
00:07:19.040 [2024-11-02 23:47:13.126033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:19.040 [2024-11-02 23:47:13.128610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:19.040 [2024-11-02 23:47:13.128658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:07:19.040 [2024-11-02 23:47:13.128775] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:07:19.040 [2024-11-02 23:47:13.128831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:07:19.040 [2024-11-02 23:47:13.128962] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:07:19.040 [2024-11-02 23:47:13.128978] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:19.040 [2024-11-02 23:47:13.129007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring
00:07:19.040 [2024-11-02 23:47:13.129043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev
pt2 is claimed 00:07:19.040 [2024-11-02 23:47:13.129122] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:07:19.040 [2024-11-02 23:47:13.129135] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:19.308 pt1 00:07:19.308 [2024-11-02 23:47:13.129395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:07:19.308 [2024-11-02 23:47:13.129542] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:07:19.308 [2024-11-02 23:47:13.129553] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:07:19.308 [2024-11-02 23:47:13.129685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.308 "name": "raid_bdev1", 00:07:19.308 "uuid": "7bbab532-c60f-44c9-bc08-89a0737c43af", 00:07:19.308 "strip_size_kb": 0, 00:07:19.308 "state": "online", 00:07:19.308 "raid_level": "raid1", 00:07:19.308 "superblock": true, 00:07:19.308 "num_base_bdevs": 2, 00:07:19.308 "num_base_bdevs_discovered": 1, 00:07:19.308 "num_base_bdevs_operational": 1, 00:07:19.308 "base_bdevs_list": [ 00:07:19.308 { 00:07:19.308 "name": null, 00:07:19.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.308 "is_configured": false, 00:07:19.308 "data_offset": 2048, 00:07:19.308 "data_size": 63488 00:07:19.308 }, 00:07:19.308 { 00:07:19.308 "name": "pt2", 00:07:19.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:19.308 "is_configured": true, 00:07:19.308 "data_offset": 2048, 00:07:19.308 "data_size": 63488 00:07:19.308 } 00:07:19.308 ] 00:07:19.308 }' 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.308 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:19.568 [2024-11-02 23:47:13.581467] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7bbab532-c60f-44c9-bc08-89a0737c43af '!=' 7bbab532-c60f-44c9-bc08-89a0737c43af ']' 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74266 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 74266 ']' 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 74266 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74266 00:07:19.568 killing process with pid 
74266 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74266' 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 74266 00:07:19.568 [2024-11-02 23:47:13.656121] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:19.568 [2024-11-02 23:47:13.656238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.568 23:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 74266 00:07:19.568 [2024-11-02 23:47:13.656306] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.568 [2024-11-02 23:47:13.656319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:07:19.827 [2024-11-02 23:47:13.699505] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:20.086 23:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:20.086 00:07:20.086 real 0m4.913s 00:07:20.086 user 0m7.873s 00:07:20.086 sys 0m1.089s 00:07:20.086 23:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:20.086 ************************************ 00:07:20.086 END TEST raid_superblock_test 00:07:20.086 23:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.086 ************************************ 00:07:20.086 23:47:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:20.086 23:47:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:20.086 23:47:14 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:07:20.086 23:47:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:20.086 ************************************ 00:07:20.086 START TEST raid_read_error_test 00:07:20.086 ************************************ 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:20.086 23:47:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5LLOlrJhFU 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74585 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74585 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 74585 ']' 00:07:20.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:20.086 23:47:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.345 [2024-11-02 23:47:14.194005] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:07:20.345 [2024-11-02 23:47:14.194131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74585 ] 00:07:20.345 [2024-11-02 23:47:14.351319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.345 [2024-11-02 23:47:14.392483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.604 [2024-11-02 23:47:14.469132] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.604 [2024-11-02 23:47:14.469301] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.177 BaseBdev1_malloc 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.177 true 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.177 [2024-11-02 23:47:15.075803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:21.177 [2024-11-02 23:47:15.075944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.177 [2024-11-02 23:47:15.075989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:21.177 [2024-11-02 23:47:15.076047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.177 [2024-11-02 23:47:15.078567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.177 [2024-11-02 23:47:15.078665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:21.177 BaseBdev1 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:21.177 BaseBdev2_malloc 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.177 true 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.177 [2024-11-02 23:47:15.122950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:21.177 [2024-11-02 23:47:15.123084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.177 [2024-11-02 23:47:15.123128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:21.177 [2024-11-02 23:47:15.123184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.177 [2024-11-02 23:47:15.125612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.177 [2024-11-02 23:47:15.125696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:21.177 BaseBdev2 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:21.177 23:47:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.177 [2024-11-02 23:47:15.134993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:21.177 [2024-11-02 23:47:15.137231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:21.177 [2024-11-02 23:47:15.137501] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:21.177 [2024-11-02 23:47:15.137557] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:21.177 [2024-11-02 23:47:15.137895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:21.177 [2024-11-02 23:47:15.138089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:21.177 [2024-11-02 23:47:15.138143] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:21.177 [2024-11-02 23:47:15.138330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.177 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:21.178 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.178 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.178 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.178 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.178 "name": "raid_bdev1", 00:07:21.178 "uuid": "bb4e256e-c935-4fba-9c47-d470a59c9227", 00:07:21.178 "strip_size_kb": 0, 00:07:21.178 "state": "online", 00:07:21.178 "raid_level": "raid1", 00:07:21.178 "superblock": true, 00:07:21.178 "num_base_bdevs": 2, 00:07:21.178 "num_base_bdevs_discovered": 2, 00:07:21.178 "num_base_bdevs_operational": 2, 00:07:21.178 "base_bdevs_list": [ 00:07:21.178 { 00:07:21.178 "name": "BaseBdev1", 00:07:21.178 "uuid": "a90de85e-3f1c-5d86-ad64-13cfc4952888", 00:07:21.178 "is_configured": true, 00:07:21.178 "data_offset": 2048, 00:07:21.178 "data_size": 63488 00:07:21.178 }, 00:07:21.178 { 00:07:21.178 "name": "BaseBdev2", 00:07:21.178 "uuid": "afc9bc53-4440-5d33-838c-422d8e65ce4f", 00:07:21.178 "is_configured": true, 00:07:21.178 "data_offset": 2048, 00:07:21.178 "data_size": 63488 00:07:21.178 } 00:07:21.178 ] 00:07:21.178 }' 00:07:21.178 23:47:15 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.178 23:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.746 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:21.746 23:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:21.746 [2024-11-02 23:47:15.630735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:22.686 23:47:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.686 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.686 "name": "raid_bdev1", 00:07:22.686 "uuid": "bb4e256e-c935-4fba-9c47-d470a59c9227", 00:07:22.686 "strip_size_kb": 0, 00:07:22.686 "state": "online", 00:07:22.686 "raid_level": "raid1", 00:07:22.686 "superblock": true, 00:07:22.686 "num_base_bdevs": 2, 00:07:22.686 "num_base_bdevs_discovered": 2, 00:07:22.686 "num_base_bdevs_operational": 2, 00:07:22.686 "base_bdevs_list": [ 00:07:22.686 { 00:07:22.686 "name": "BaseBdev1", 00:07:22.686 "uuid": "a90de85e-3f1c-5d86-ad64-13cfc4952888", 00:07:22.686 "is_configured": true, 00:07:22.686 "data_offset": 2048, 00:07:22.686 "data_size": 63488 00:07:22.686 }, 00:07:22.686 { 00:07:22.686 "name": "BaseBdev2", 00:07:22.686 "uuid": "afc9bc53-4440-5d33-838c-422d8e65ce4f", 00:07:22.686 "is_configured": true, 00:07:22.687 "data_offset": 2048, 00:07:22.687 "data_size": 63488 
00:07:22.687 } 00:07:22.687 ] 00:07:22.687 }' 00:07:22.687 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.687 23:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.947 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:22.947 23:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.947 23:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.947 [2024-11-02 23:47:16.993438] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:22.947 [2024-11-02 23:47:16.993492] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:22.947 [2024-11-02 23:47:16.996113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:22.947 [2024-11-02 23:47:16.996169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.947 [2024-11-02 23:47:16.996283] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:22.947 [2024-11-02 23:47:16.996296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:22.947 { 00:07:22.947 "results": [ 00:07:22.947 { 00:07:22.947 "job": "raid_bdev1", 00:07:22.947 "core_mask": "0x1", 00:07:22.947 "workload": "randrw", 00:07:22.947 "percentage": 50, 00:07:22.947 "status": "finished", 00:07:22.947 "queue_depth": 1, 00:07:22.947 "io_size": 131072, 00:07:22.947 "runtime": 1.363183, 00:07:22.947 "iops": 14950.303811007032, 00:07:22.947 "mibps": 1868.787976375879, 00:07:22.947 "io_failed": 0, 00:07:22.947 "io_timeout": 0, 00:07:22.947 "avg_latency_us": 64.19401502457671, 00:07:22.947 "min_latency_us": 22.69344978165939, 00:07:22.947 "max_latency_us": 1488.1537117903931 00:07:22.947 } 00:07:22.947 ], 
00:07:22.947 "core_count": 1 00:07:22.947 } 00:07:22.947 23:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.947 23:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74585 00:07:22.947 23:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 74585 ']' 00:07:22.947 23:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 74585 00:07:22.947 23:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:07:22.947 23:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:22.947 23:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74585 00:07:23.206 23:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:23.206 23:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:23.206 killing process with pid 74585 00:07:23.206 23:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74585' 00:07:23.206 23:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 74585 00:07:23.206 [2024-11-02 23:47:17.045761] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:23.206 23:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 74585 00:07:23.206 [2024-11-02 23:47:17.076022] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:23.467 23:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:23.467 23:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5LLOlrJhFU 00:07:23.467 23:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:23.467 23:47:17 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:23.467 23:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:23.467 ************************************ 00:07:23.467 END TEST raid_read_error_test 00:07:23.467 ************************************ 00:07:23.467 23:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:23.467 23:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:23.467 23:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:23.467 00:07:23.467 real 0m3.321s 00:07:23.467 user 0m4.073s 00:07:23.467 sys 0m0.600s 00:07:23.467 23:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:23.467 23:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.467 23:47:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:23.467 23:47:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:23.467 23:47:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:23.467 23:47:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:23.467 ************************************ 00:07:23.467 START TEST raid_write_error_test 00:07:23.467 ************************************ 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HR6nFTR4gw 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74714 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74714 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 74714 ']' 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:23.467 23:47:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.728 [2024-11-02 23:47:17.581254] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:07:23.728 [2024-11-02 23:47:17.581392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74714 ] 00:07:23.728 [2024-11-02 23:47:17.736859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.728 [2024-11-02 23:47:17.776483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.988 [2024-11-02 23:47:17.853702] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.988 [2024-11-02 23:47:17.853773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.558 BaseBdev1_malloc 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.558 true 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.558 [2024-11-02 23:47:18.454061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:24.558 [2024-11-02 23:47:18.454148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.558 [2024-11-02 23:47:18.454184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:24.558 [2024-11-02 23:47:18.454203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.558 [2024-11-02 23:47:18.456659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.558 [2024-11-02 23:47:18.456704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:24.558 BaseBdev1 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.558 BaseBdev2_malloc 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:24.558 23:47:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.558 true 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.558 [2024-11-02 23:47:18.500599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:24.558 [2024-11-02 23:47:18.500662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.558 [2024-11-02 23:47:18.500684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:24.558 [2024-11-02 23:47:18.500706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.558 [2024-11-02 23:47:18.503176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.558 [2024-11-02 23:47:18.503219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:24.558 BaseBdev2 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.558 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.558 [2024-11-02 23:47:18.512662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:24.559 [2024-11-02 23:47:18.514887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:24.559 [2024-11-02 23:47:18.515095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:24.559 [2024-11-02 23:47:18.515115] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:24.559 [2024-11-02 23:47:18.515414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:24.559 [2024-11-02 23:47:18.515572] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:24.559 [2024-11-02 23:47:18.515596] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:24.559 [2024-11-02 23:47:18.515744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.559 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.559 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:24.559 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:24.559 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:24.559 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:24.559 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:24.559 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.559 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.559 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.559 23:47:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.559 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.559 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.559 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:24.559 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.559 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.559 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.559 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.559 "name": "raid_bdev1", 00:07:24.559 "uuid": "8a4fde68-a056-4075-9bb2-b911d4e0e0b5", 00:07:24.559 "strip_size_kb": 0, 00:07:24.559 "state": "online", 00:07:24.559 "raid_level": "raid1", 00:07:24.559 "superblock": true, 00:07:24.559 "num_base_bdevs": 2, 00:07:24.559 "num_base_bdevs_discovered": 2, 00:07:24.559 "num_base_bdevs_operational": 2, 00:07:24.559 "base_bdevs_list": [ 00:07:24.559 { 00:07:24.559 "name": "BaseBdev1", 00:07:24.559 "uuid": "191ddb2a-7ab7-544b-b5fa-b8b158597cb2", 00:07:24.559 "is_configured": true, 00:07:24.559 "data_offset": 2048, 00:07:24.559 "data_size": 63488 00:07:24.559 }, 00:07:24.559 { 00:07:24.559 "name": "BaseBdev2", 00:07:24.559 "uuid": "9d6ea22b-b4fc-53fc-a19b-89e72479c360", 00:07:24.559 "is_configured": true, 00:07:24.559 "data_offset": 2048, 00:07:24.559 "data_size": 63488 00:07:24.559 } 00:07:24.559 ] 00:07:24.559 }' 00:07:24.559 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.559 23:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.818 23:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:24.818 23:47:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:25.078 [2024-11-02 23:47:18.996392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.039 [2024-11-02 23:47:19.915385] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:26.039 [2024-11-02 23:47:19.915469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:26.039 [2024-11-02 23:47:19.915700] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002530 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.039 "name": "raid_bdev1", 00:07:26.039 "uuid": "8a4fde68-a056-4075-9bb2-b911d4e0e0b5", 00:07:26.039 "strip_size_kb": 0, 00:07:26.039 "state": "online", 00:07:26.039 "raid_level": "raid1", 00:07:26.039 "superblock": true, 00:07:26.039 "num_base_bdevs": 2, 00:07:26.039 "num_base_bdevs_discovered": 1, 00:07:26.039 "num_base_bdevs_operational": 1, 00:07:26.039 "base_bdevs_list": [ 00:07:26.039 { 00:07:26.039 "name": null, 00:07:26.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.039 "is_configured": false, 00:07:26.039 "data_offset": 0, 00:07:26.039 "data_size": 63488 00:07:26.039 }, 00:07:26.039 { 00:07:26.039 "name": 
"BaseBdev2", 00:07:26.039 "uuid": "9d6ea22b-b4fc-53fc-a19b-89e72479c360", 00:07:26.039 "is_configured": true, 00:07:26.039 "data_offset": 2048, 00:07:26.039 "data_size": 63488 00:07:26.039 } 00:07:26.039 ] 00:07:26.039 }' 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.039 23:47:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.299 23:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:26.299 23:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.299 23:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.299 [2024-11-02 23:47:20.364170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:26.299 [2024-11-02 23:47:20.364226] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:26.299 [2024-11-02 23:47:20.366770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.299 [2024-11-02 23:47:20.366849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.299 [2024-11-02 23:47:20.366915] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:26.299 [2024-11-02 23:47:20.366934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:26.299 { 00:07:26.299 "results": [ 00:07:26.299 { 00:07:26.299 "job": "raid_bdev1", 00:07:26.299 "core_mask": "0x1", 00:07:26.299 "workload": "randrw", 00:07:26.299 "percentage": 50, 00:07:26.299 "status": "finished", 00:07:26.299 "queue_depth": 1, 00:07:26.299 "io_size": 131072, 00:07:26.299 "runtime": 1.368224, 00:07:26.299 "iops": 18611.718549009518, 00:07:26.299 "mibps": 2326.4648186261898, 00:07:26.299 "io_failed": 0, 00:07:26.299 "io_timeout": 0, 
00:07:26.299 "avg_latency_us": 51.00055522735632, 00:07:26.299 "min_latency_us": 22.69344978165939, 00:07:26.299 "max_latency_us": 1309.2890829694322 00:07:26.299 } 00:07:26.299 ], 00:07:26.299 "core_count": 1 00:07:26.299 } 00:07:26.299 23:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.299 23:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74714 00:07:26.299 23:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 74714 ']' 00:07:26.299 23:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 74714 00:07:26.299 23:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:07:26.299 23:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:26.299 23:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74714 00:07:26.558 23:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:26.558 23:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:26.558 killing process with pid 74714 00:07:26.558 23:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74714' 00:07:26.558 23:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 74714 00:07:26.558 [2024-11-02 23:47:20.408394] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:26.558 23:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 74714 00:07:26.558 [2024-11-02 23:47:20.437065] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:26.818 23:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HR6nFTR4gw 00:07:26.818 23:47:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:26.818 23:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:26.818 23:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:26.818 23:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:26.818 23:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:26.818 23:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:26.818 23:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:26.818 00:07:26.818 real 0m3.290s 00:07:26.818 user 0m4.036s 00:07:26.818 sys 0m0.594s 00:07:26.818 23:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:26.818 23:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.818 ************************************ 00:07:26.818 END TEST raid_write_error_test 00:07:26.818 ************************************ 00:07:26.818 23:47:20 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:26.818 23:47:20 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:26.818 23:47:20 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:07:26.818 23:47:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:26.818 23:47:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:26.818 23:47:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:26.818 ************************************ 00:07:26.818 START TEST raid_state_function_test 00:07:26.818 ************************************ 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:26.818 
23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74841 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74841' 00:07:26.818 Process raid pid: 74841 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74841 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 74841 ']' 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:26.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:26.818 23:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.077 [2024-11-02 23:47:20.927667] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:07:27.077 [2024-11-02 23:47:20.927826] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.077 [2024-11-02 23:47:21.084418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.077 [2024-11-02 23:47:21.123168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.336 [2024-11-02 23:47:21.200540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.336 [2024-11-02 23:47:21.200590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.905 [2024-11-02 23:47:21.797467] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:27.905 [2024-11-02 23:47:21.797545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:27.905 [2024-11-02 23:47:21.797557] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.905 [2024-11-02 23:47:21.797570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.905 [2024-11-02 23:47:21.797578] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:27.905 [2024-11-02 23:47:21.797593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.905 "name": "Existed_Raid", 00:07:27.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.905 "strip_size_kb": 64, 00:07:27.905 "state": "configuring", 00:07:27.905 "raid_level": "raid0", 00:07:27.905 "superblock": false, 00:07:27.905 "num_base_bdevs": 3, 00:07:27.905 "num_base_bdevs_discovered": 0, 00:07:27.905 "num_base_bdevs_operational": 3, 00:07:27.905 "base_bdevs_list": [ 00:07:27.905 { 00:07:27.905 "name": "BaseBdev1", 00:07:27.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.905 "is_configured": false, 00:07:27.905 "data_offset": 0, 00:07:27.905 "data_size": 0 00:07:27.905 }, 00:07:27.905 { 00:07:27.905 "name": "BaseBdev2", 00:07:27.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.905 "is_configured": false, 00:07:27.905 "data_offset": 0, 00:07:27.905 "data_size": 0 00:07:27.905 }, 00:07:27.905 { 00:07:27.905 "name": "BaseBdev3", 00:07:27.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.905 "is_configured": false, 00:07:27.905 "data_offset": 0, 00:07:27.905 "data_size": 0 00:07:27.905 } 00:07:27.905 ] 00:07:27.905 }' 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.905 23:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.164 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:28.164 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.164 23:47:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.164 [2024-11-02 23:47:22.228675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:28.164 [2024-11-02 23:47:22.228732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:28.164 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.164 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:28.164 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.164 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.164 [2024-11-02 23:47:22.240654] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:28.164 [2024-11-02 23:47:22.240708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:28.164 [2024-11-02 23:47:22.240719] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.164 [2024-11-02 23:47:22.240731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.164 [2024-11-02 23:47:22.240763] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:28.164 [2024-11-02 23:47:22.240776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:28.164 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.164 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:28.164 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:28.165 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.424 [2024-11-02 23:47:22.268727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.424 BaseBdev1 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.424 [ 00:07:28.424 { 00:07:28.424 "name": "BaseBdev1", 00:07:28.424 "aliases": [ 00:07:28.424 "fcc7e0dd-e2d7-40ec-ba5b-b5e9fe65ec43" 00:07:28.424 ], 00:07:28.424 
"product_name": "Malloc disk", 00:07:28.424 "block_size": 512, 00:07:28.424 "num_blocks": 65536, 00:07:28.424 "uuid": "fcc7e0dd-e2d7-40ec-ba5b-b5e9fe65ec43", 00:07:28.424 "assigned_rate_limits": { 00:07:28.424 "rw_ios_per_sec": 0, 00:07:28.424 "rw_mbytes_per_sec": 0, 00:07:28.424 "r_mbytes_per_sec": 0, 00:07:28.424 "w_mbytes_per_sec": 0 00:07:28.424 }, 00:07:28.424 "claimed": true, 00:07:28.424 "claim_type": "exclusive_write", 00:07:28.424 "zoned": false, 00:07:28.424 "supported_io_types": { 00:07:28.424 "read": true, 00:07:28.424 "write": true, 00:07:28.424 "unmap": true, 00:07:28.424 "flush": true, 00:07:28.424 "reset": true, 00:07:28.424 "nvme_admin": false, 00:07:28.424 "nvme_io": false, 00:07:28.424 "nvme_io_md": false, 00:07:28.424 "write_zeroes": true, 00:07:28.424 "zcopy": true, 00:07:28.424 "get_zone_info": false, 00:07:28.424 "zone_management": false, 00:07:28.424 "zone_append": false, 00:07:28.424 "compare": false, 00:07:28.424 "compare_and_write": false, 00:07:28.424 "abort": true, 00:07:28.424 "seek_hole": false, 00:07:28.424 "seek_data": false, 00:07:28.424 "copy": true, 00:07:28.424 "nvme_iov_md": false 00:07:28.424 }, 00:07:28.424 "memory_domains": [ 00:07:28.424 { 00:07:28.424 "dma_device_id": "system", 00:07:28.424 "dma_device_type": 1 00:07:28.424 }, 00:07:28.424 { 00:07:28.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.424 "dma_device_type": 2 00:07:28.424 } 00:07:28.424 ], 00:07:28.424 "driver_specific": {} 00:07:28.424 } 00:07:28.424 ] 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.424 23:47:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.424 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.424 "name": "Existed_Raid", 00:07:28.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.424 "strip_size_kb": 64, 00:07:28.424 "state": "configuring", 00:07:28.424 "raid_level": "raid0", 00:07:28.424 "superblock": false, 00:07:28.424 "num_base_bdevs": 3, 00:07:28.424 "num_base_bdevs_discovered": 1, 00:07:28.424 "num_base_bdevs_operational": 3, 00:07:28.424 "base_bdevs_list": [ 00:07:28.424 { 00:07:28.424 "name": "BaseBdev1", 
00:07:28.424 "uuid": "fcc7e0dd-e2d7-40ec-ba5b-b5e9fe65ec43", 00:07:28.424 "is_configured": true, 00:07:28.424 "data_offset": 0, 00:07:28.424 "data_size": 65536 00:07:28.424 }, 00:07:28.424 { 00:07:28.424 "name": "BaseBdev2", 00:07:28.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.424 "is_configured": false, 00:07:28.424 "data_offset": 0, 00:07:28.424 "data_size": 0 00:07:28.424 }, 00:07:28.424 { 00:07:28.424 "name": "BaseBdev3", 00:07:28.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.424 "is_configured": false, 00:07:28.424 "data_offset": 0, 00:07:28.424 "data_size": 0 00:07:28.424 } 00:07:28.424 ] 00:07:28.424 }' 00:07:28.425 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.425 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.684 [2024-11-02 23:47:22.736043] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:28.684 [2024-11-02 23:47:22.736130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.684 [2024-11-02 
23:47:22.748019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.684 [2024-11-02 23:47:22.750348] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.684 [2024-11-02 23:47:22.750407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.684 [2024-11-02 23:47:22.750420] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:28.684 [2024-11-02 23:47:22.750449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.684 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.945 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.945 "name": "Existed_Raid", 00:07:28.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.945 "strip_size_kb": 64, 00:07:28.945 "state": "configuring", 00:07:28.945 "raid_level": "raid0", 00:07:28.945 "superblock": false, 00:07:28.945 "num_base_bdevs": 3, 00:07:28.945 "num_base_bdevs_discovered": 1, 00:07:28.945 "num_base_bdevs_operational": 3, 00:07:28.945 "base_bdevs_list": [ 00:07:28.945 { 00:07:28.945 "name": "BaseBdev1", 00:07:28.945 "uuid": "fcc7e0dd-e2d7-40ec-ba5b-b5e9fe65ec43", 00:07:28.945 "is_configured": true, 00:07:28.945 "data_offset": 0, 00:07:28.945 "data_size": 65536 00:07:28.945 }, 00:07:28.945 { 00:07:28.945 "name": "BaseBdev2", 00:07:28.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.945 "is_configured": false, 00:07:28.945 "data_offset": 0, 00:07:28.945 "data_size": 0 00:07:28.945 }, 00:07:28.945 { 00:07:28.945 "name": "BaseBdev3", 00:07:28.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.945 "is_configured": false, 00:07:28.945 "data_offset": 0, 00:07:28.945 "data_size": 0 00:07:28.945 } 00:07:28.945 ] 00:07:28.945 }' 00:07:28.945 23:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:28.945 23:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.205 [2024-11-02 23:47:23.180281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:29.205 BaseBdev2 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:29.205 23:47:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.205 [ 00:07:29.205 { 00:07:29.205 "name": "BaseBdev2", 00:07:29.205 "aliases": [ 00:07:29.205 "bf0546fe-68dc-499f-a3ed-e9c4571ada69" 00:07:29.205 ], 00:07:29.205 "product_name": "Malloc disk", 00:07:29.205 "block_size": 512, 00:07:29.205 "num_blocks": 65536, 00:07:29.205 "uuid": "bf0546fe-68dc-499f-a3ed-e9c4571ada69", 00:07:29.205 "assigned_rate_limits": { 00:07:29.205 "rw_ios_per_sec": 0, 00:07:29.205 "rw_mbytes_per_sec": 0, 00:07:29.205 "r_mbytes_per_sec": 0, 00:07:29.205 "w_mbytes_per_sec": 0 00:07:29.205 }, 00:07:29.205 "claimed": true, 00:07:29.205 "claim_type": "exclusive_write", 00:07:29.205 "zoned": false, 00:07:29.205 "supported_io_types": { 00:07:29.205 "read": true, 00:07:29.205 "write": true, 00:07:29.205 "unmap": true, 00:07:29.205 "flush": true, 00:07:29.205 "reset": true, 00:07:29.205 "nvme_admin": false, 00:07:29.205 "nvme_io": false, 00:07:29.205 "nvme_io_md": false, 00:07:29.205 "write_zeroes": true, 00:07:29.205 "zcopy": true, 00:07:29.205 "get_zone_info": false, 00:07:29.205 "zone_management": false, 00:07:29.205 "zone_append": false, 00:07:29.205 "compare": false, 00:07:29.205 "compare_and_write": false, 00:07:29.205 "abort": true, 00:07:29.205 "seek_hole": false, 00:07:29.205 "seek_data": false, 00:07:29.205 "copy": true, 00:07:29.205 "nvme_iov_md": false 00:07:29.205 }, 00:07:29.205 "memory_domains": [ 00:07:29.205 { 00:07:29.205 "dma_device_id": "system", 00:07:29.205 "dma_device_type": 1 00:07:29.205 }, 00:07:29.205 { 00:07:29.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.205 "dma_device_type": 2 00:07:29.205 } 00:07:29.205 ], 00:07:29.205 "driver_specific": {} 00:07:29.205 } 00:07:29.205 ] 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.205 23:47:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.205 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.205 "name": "Existed_Raid", 00:07:29.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.205 "strip_size_kb": 64, 00:07:29.205 "state": "configuring", 00:07:29.205 "raid_level": "raid0", 00:07:29.205 "superblock": false, 00:07:29.205 "num_base_bdevs": 3, 00:07:29.205 "num_base_bdevs_discovered": 2, 00:07:29.205 "num_base_bdevs_operational": 3, 00:07:29.205 "base_bdevs_list": [ 00:07:29.205 { 00:07:29.205 "name": "BaseBdev1", 00:07:29.205 "uuid": "fcc7e0dd-e2d7-40ec-ba5b-b5e9fe65ec43", 00:07:29.205 "is_configured": true, 00:07:29.205 "data_offset": 0, 00:07:29.205 "data_size": 65536 00:07:29.206 }, 00:07:29.206 { 00:07:29.206 "name": "BaseBdev2", 00:07:29.206 "uuid": "bf0546fe-68dc-499f-a3ed-e9c4571ada69", 00:07:29.206 "is_configured": true, 00:07:29.206 "data_offset": 0, 00:07:29.206 "data_size": 65536 00:07:29.206 }, 00:07:29.206 { 00:07:29.206 "name": "BaseBdev3", 00:07:29.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.206 "is_configured": false, 00:07:29.206 "data_offset": 0, 00:07:29.206 "data_size": 0 00:07:29.206 } 00:07:29.206 ] 00:07:29.206 }' 00:07:29.206 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.206 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.775 [2024-11-02 23:47:23.706989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:29.775 [2024-11-02 23:47:23.707049] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:29.775 [2024-11-02 23:47:23.707064] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:29.775 [2024-11-02 23:47:23.707470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:29.775 [2024-11-02 23:47:23.707679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:29.775 [2024-11-02 23:47:23.707703] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:29.775 [2024-11-02 23:47:23.708020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.775 BaseBdev3 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.775 
23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.775 [ 00:07:29.775 { 00:07:29.775 "name": "BaseBdev3", 00:07:29.775 "aliases": [ 00:07:29.775 "9759d026-4f91-4cc6-9539-60a21ca962fe" 00:07:29.775 ], 00:07:29.775 "product_name": "Malloc disk", 00:07:29.775 "block_size": 512, 00:07:29.775 "num_blocks": 65536, 00:07:29.775 "uuid": "9759d026-4f91-4cc6-9539-60a21ca962fe", 00:07:29.775 "assigned_rate_limits": { 00:07:29.775 "rw_ios_per_sec": 0, 00:07:29.775 "rw_mbytes_per_sec": 0, 00:07:29.775 "r_mbytes_per_sec": 0, 00:07:29.775 "w_mbytes_per_sec": 0 00:07:29.775 }, 00:07:29.775 "claimed": true, 00:07:29.775 "claim_type": "exclusive_write", 00:07:29.775 "zoned": false, 00:07:29.775 "supported_io_types": { 00:07:29.775 "read": true, 00:07:29.775 "write": true, 00:07:29.775 "unmap": true, 00:07:29.775 "flush": true, 00:07:29.775 "reset": true, 00:07:29.775 "nvme_admin": false, 00:07:29.775 "nvme_io": false, 00:07:29.775 "nvme_io_md": false, 00:07:29.775 "write_zeroes": true, 00:07:29.775 "zcopy": true, 00:07:29.775 "get_zone_info": false, 00:07:29.775 "zone_management": false, 00:07:29.775 "zone_append": false, 00:07:29.775 "compare": false, 00:07:29.775 "compare_and_write": false, 00:07:29.775 "abort": true, 00:07:29.775 "seek_hole": false, 00:07:29.775 "seek_data": false, 00:07:29.775 "copy": true, 00:07:29.775 "nvme_iov_md": false 00:07:29.775 }, 00:07:29.775 "memory_domains": [ 00:07:29.775 { 00:07:29.775 "dma_device_id": "system", 00:07:29.775 "dma_device_type": 1 00:07:29.775 }, 00:07:29.775 { 00:07:29.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.775 "dma_device_type": 2 00:07:29.775 } 00:07:29.775 ], 00:07:29.775 "driver_specific": {} 00:07:29.775 } 00:07:29.775 ] 
00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.775 "name": "Existed_Raid", 00:07:29.775 "uuid": "540238e8-de10-423b-b4e5-c000abab479a", 00:07:29.775 "strip_size_kb": 64, 00:07:29.775 "state": "online", 00:07:29.775 "raid_level": "raid0", 00:07:29.775 "superblock": false, 00:07:29.775 "num_base_bdevs": 3, 00:07:29.775 "num_base_bdevs_discovered": 3, 00:07:29.775 "num_base_bdevs_operational": 3, 00:07:29.775 "base_bdevs_list": [ 00:07:29.775 { 00:07:29.775 "name": "BaseBdev1", 00:07:29.775 "uuid": "fcc7e0dd-e2d7-40ec-ba5b-b5e9fe65ec43", 00:07:29.775 "is_configured": true, 00:07:29.775 "data_offset": 0, 00:07:29.775 "data_size": 65536 00:07:29.775 }, 00:07:29.775 { 00:07:29.775 "name": "BaseBdev2", 00:07:29.775 "uuid": "bf0546fe-68dc-499f-a3ed-e9c4571ada69", 00:07:29.775 "is_configured": true, 00:07:29.775 "data_offset": 0, 00:07:29.775 "data_size": 65536 00:07:29.775 }, 00:07:29.775 { 00:07:29.775 "name": "BaseBdev3", 00:07:29.775 "uuid": "9759d026-4f91-4cc6-9539-60a21ca962fe", 00:07:29.775 "is_configured": true, 00:07:29.775 "data_offset": 0, 00:07:29.775 "data_size": 65536 00:07:29.775 } 00:07:29.775 ] 00:07:29.775 }' 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.775 23:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.035 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:30.035 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:30.035 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:30.035 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:07:30.035 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:30.035 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:30.035 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:30.035 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:30.035 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.035 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.035 [2024-11-02 23:47:24.122826] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.295 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.295 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:30.295 "name": "Existed_Raid", 00:07:30.295 "aliases": [ 00:07:30.295 "540238e8-de10-423b-b4e5-c000abab479a" 00:07:30.295 ], 00:07:30.295 "product_name": "Raid Volume", 00:07:30.295 "block_size": 512, 00:07:30.295 "num_blocks": 196608, 00:07:30.295 "uuid": "540238e8-de10-423b-b4e5-c000abab479a", 00:07:30.295 "assigned_rate_limits": { 00:07:30.295 "rw_ios_per_sec": 0, 00:07:30.295 "rw_mbytes_per_sec": 0, 00:07:30.295 "r_mbytes_per_sec": 0, 00:07:30.295 "w_mbytes_per_sec": 0 00:07:30.295 }, 00:07:30.295 "claimed": false, 00:07:30.295 "zoned": false, 00:07:30.295 "supported_io_types": { 00:07:30.295 "read": true, 00:07:30.295 "write": true, 00:07:30.295 "unmap": true, 00:07:30.295 "flush": true, 00:07:30.295 "reset": true, 00:07:30.295 "nvme_admin": false, 00:07:30.295 "nvme_io": false, 00:07:30.295 "nvme_io_md": false, 00:07:30.295 "write_zeroes": true, 00:07:30.295 "zcopy": false, 00:07:30.295 "get_zone_info": false, 00:07:30.295 "zone_management": false, 00:07:30.295 
"zone_append": false, 00:07:30.295 "compare": false, 00:07:30.295 "compare_and_write": false, 00:07:30.295 "abort": false, 00:07:30.295 "seek_hole": false, 00:07:30.295 "seek_data": false, 00:07:30.295 "copy": false, 00:07:30.295 "nvme_iov_md": false 00:07:30.295 }, 00:07:30.295 "memory_domains": [ 00:07:30.295 { 00:07:30.295 "dma_device_id": "system", 00:07:30.295 "dma_device_type": 1 00:07:30.295 }, 00:07:30.295 { 00:07:30.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.295 "dma_device_type": 2 00:07:30.295 }, 00:07:30.295 { 00:07:30.295 "dma_device_id": "system", 00:07:30.295 "dma_device_type": 1 00:07:30.295 }, 00:07:30.295 { 00:07:30.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.295 "dma_device_type": 2 00:07:30.295 }, 00:07:30.295 { 00:07:30.295 "dma_device_id": "system", 00:07:30.295 "dma_device_type": 1 00:07:30.295 }, 00:07:30.295 { 00:07:30.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.295 "dma_device_type": 2 00:07:30.295 } 00:07:30.295 ], 00:07:30.295 "driver_specific": { 00:07:30.295 "raid": { 00:07:30.295 "uuid": "540238e8-de10-423b-b4e5-c000abab479a", 00:07:30.295 "strip_size_kb": 64, 00:07:30.295 "state": "online", 00:07:30.295 "raid_level": "raid0", 00:07:30.295 "superblock": false, 00:07:30.295 "num_base_bdevs": 3, 00:07:30.295 "num_base_bdevs_discovered": 3, 00:07:30.295 "num_base_bdevs_operational": 3, 00:07:30.295 "base_bdevs_list": [ 00:07:30.295 { 00:07:30.295 "name": "BaseBdev1", 00:07:30.295 "uuid": "fcc7e0dd-e2d7-40ec-ba5b-b5e9fe65ec43", 00:07:30.295 "is_configured": true, 00:07:30.295 "data_offset": 0, 00:07:30.295 "data_size": 65536 00:07:30.295 }, 00:07:30.295 { 00:07:30.295 "name": "BaseBdev2", 00:07:30.295 "uuid": "bf0546fe-68dc-499f-a3ed-e9c4571ada69", 00:07:30.295 "is_configured": true, 00:07:30.295 "data_offset": 0, 00:07:30.295 "data_size": 65536 00:07:30.295 }, 00:07:30.295 { 00:07:30.295 "name": "BaseBdev3", 00:07:30.295 "uuid": "9759d026-4f91-4cc6-9539-60a21ca962fe", 00:07:30.295 "is_configured": true, 
00:07:30.295 "data_offset": 0, 00:07:30.295 "data_size": 65536 00:07:30.295 } 00:07:30.295 ] 00:07:30.295 } 00:07:30.295 } 00:07:30.295 }' 00:07:30.295 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:30.295 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:30.295 BaseBdev2 00:07:30.295 BaseBdev3' 00:07:30.295 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.295 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:30.295 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.295 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.295 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:30.295 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.295 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.295 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.295 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.295 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.295 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.295 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:30.295 23:47:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.295 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.296 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.296 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.296 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.296 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.296 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.296 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:30.296 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.296 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.296 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.296 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.296 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.296 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.296 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:30.296 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.296 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.296 [2024-11-02 23:47:24.386015] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:30.296 [2024-11-02 23:47:24.386058] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:30.296 [2024-11-02 23:47:24.386147] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.555 "name": "Existed_Raid", 00:07:30.555 "uuid": "540238e8-de10-423b-b4e5-c000abab479a", 00:07:30.555 "strip_size_kb": 64, 00:07:30.555 "state": "offline", 00:07:30.555 "raid_level": "raid0", 00:07:30.555 "superblock": false, 00:07:30.555 "num_base_bdevs": 3, 00:07:30.555 "num_base_bdevs_discovered": 2, 00:07:30.555 "num_base_bdevs_operational": 2, 00:07:30.555 "base_bdevs_list": [ 00:07:30.555 { 00:07:30.555 "name": null, 00:07:30.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.555 "is_configured": false, 00:07:30.555 "data_offset": 0, 00:07:30.555 "data_size": 65536 00:07:30.555 }, 00:07:30.555 { 00:07:30.555 "name": "BaseBdev2", 00:07:30.555 "uuid": "bf0546fe-68dc-499f-a3ed-e9c4571ada69", 00:07:30.555 "is_configured": true, 00:07:30.555 "data_offset": 0, 00:07:30.555 "data_size": 65536 00:07:30.555 }, 00:07:30.555 { 00:07:30.555 "name": "BaseBdev3", 00:07:30.555 "uuid": "9759d026-4f91-4cc6-9539-60a21ca962fe", 00:07:30.555 "is_configured": true, 00:07:30.555 "data_offset": 0, 00:07:30.555 "data_size": 65536 00:07:30.555 } 00:07:30.555 ] 00:07:30.555 }' 00:07:30.555 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.555 23:47:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.815 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:30.815 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:30.815 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.815 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.815 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:30.815 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.815 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.815 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:30.815 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:30.815 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:30.815 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.815 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.815 [2024-11-02 23:47:24.878525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:30.815 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.815 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:30.815 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:30.815 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.815 23:47:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.815 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.815 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:31.075 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.075 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:31.075 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:31.075 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:31.075 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.075 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.075 [2024-11-02 23:47:24.954992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:31.075 [2024-11-02 23:47:24.955062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:31.075 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.075 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:31.075 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:31.075 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.075 23:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:31.075 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.075 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:31.075 23:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.075 BaseBdev2 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.075 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.075 [ 00:07:31.075 { 00:07:31.075 "name": "BaseBdev2", 00:07:31.075 "aliases": [ 00:07:31.075 "1b2c2ffe-2068-470c-b6c1-dc99015f4313" 00:07:31.075 ], 00:07:31.075 "product_name": "Malloc disk", 00:07:31.075 "block_size": 512, 00:07:31.075 "num_blocks": 65536, 00:07:31.075 "uuid": "1b2c2ffe-2068-470c-b6c1-dc99015f4313", 00:07:31.075 "assigned_rate_limits": { 00:07:31.075 "rw_ios_per_sec": 0, 00:07:31.075 "rw_mbytes_per_sec": 0, 00:07:31.075 "r_mbytes_per_sec": 0, 00:07:31.075 "w_mbytes_per_sec": 0 00:07:31.075 }, 00:07:31.075 "claimed": false, 00:07:31.075 "zoned": false, 00:07:31.075 "supported_io_types": { 00:07:31.075 "read": true, 00:07:31.075 "write": true, 00:07:31.075 "unmap": true, 00:07:31.075 "flush": true, 00:07:31.075 "reset": true, 00:07:31.075 "nvme_admin": false, 00:07:31.075 "nvme_io": false, 00:07:31.075 "nvme_io_md": false, 00:07:31.075 "write_zeroes": true, 00:07:31.075 "zcopy": true, 00:07:31.075 "get_zone_info": false, 00:07:31.075 "zone_management": false, 00:07:31.075 "zone_append": false, 00:07:31.075 "compare": false, 00:07:31.075 "compare_and_write": false, 00:07:31.075 "abort": true, 00:07:31.075 "seek_hole": false, 00:07:31.075 "seek_data": false, 00:07:31.075 "copy": true, 00:07:31.075 "nvme_iov_md": false 00:07:31.075 }, 00:07:31.075 "memory_domains": [ 00:07:31.076 { 00:07:31.076 "dma_device_id": "system", 00:07:31.076 "dma_device_type": 1 00:07:31.076 }, 
00:07:31.076 { 00:07:31.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.076 "dma_device_type": 2 00:07:31.076 } 00:07:31.076 ], 00:07:31.076 "driver_specific": {} 00:07:31.076 } 00:07:31.076 ] 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.076 BaseBdev3 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.076 [ 00:07:31.076 { 00:07:31.076 "name": "BaseBdev3", 00:07:31.076 "aliases": [ 00:07:31.076 "83bad1c5-491e-4ffa-a63f-ab759e842cca" 00:07:31.076 ], 00:07:31.076 "product_name": "Malloc disk", 00:07:31.076 "block_size": 512, 00:07:31.076 "num_blocks": 65536, 00:07:31.076 "uuid": "83bad1c5-491e-4ffa-a63f-ab759e842cca", 00:07:31.076 "assigned_rate_limits": { 00:07:31.076 "rw_ios_per_sec": 0, 00:07:31.076 "rw_mbytes_per_sec": 0, 00:07:31.076 "r_mbytes_per_sec": 0, 00:07:31.076 "w_mbytes_per_sec": 0 00:07:31.076 }, 00:07:31.076 "claimed": false, 00:07:31.076 "zoned": false, 00:07:31.076 "supported_io_types": { 00:07:31.076 "read": true, 00:07:31.076 "write": true, 00:07:31.076 "unmap": true, 00:07:31.076 "flush": true, 00:07:31.076 "reset": true, 00:07:31.076 "nvme_admin": false, 00:07:31.076 "nvme_io": false, 00:07:31.076 "nvme_io_md": false, 00:07:31.076 "write_zeroes": true, 00:07:31.076 "zcopy": true, 00:07:31.076 "get_zone_info": false, 00:07:31.076 "zone_management": false, 00:07:31.076 "zone_append": false, 00:07:31.076 "compare": false, 00:07:31.076 "compare_and_write": false, 00:07:31.076 "abort": true, 00:07:31.076 "seek_hole": false, 00:07:31.076 "seek_data": false, 00:07:31.076 "copy": true, 00:07:31.076 "nvme_iov_md": false 00:07:31.076 }, 00:07:31.076 "memory_domains": [ 00:07:31.076 { 00:07:31.076 "dma_device_id": "system", 00:07:31.076 "dma_device_type": 1 00:07:31.076 }, 00:07:31.076 { 
00:07:31.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.076 "dma_device_type": 2 00:07:31.076 } 00:07:31.076 ], 00:07:31.076 "driver_specific": {} 00:07:31.076 } 00:07:31.076 ] 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.076 [2024-11-02 23:47:25.147363] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:31.076 [2024-11-02 23:47:25.147427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:31.076 [2024-11-02 23:47:25.147452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:31.076 [2024-11-02 23:47:25.149611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.076 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.335 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.335 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.335 "name": "Existed_Raid", 00:07:31.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.335 "strip_size_kb": 64, 00:07:31.335 "state": "configuring", 00:07:31.335 "raid_level": "raid0", 00:07:31.335 "superblock": false, 00:07:31.335 "num_base_bdevs": 3, 00:07:31.335 "num_base_bdevs_discovered": 2, 00:07:31.335 "num_base_bdevs_operational": 3, 00:07:31.335 "base_bdevs_list": [ 00:07:31.335 { 00:07:31.335 "name": "BaseBdev1", 00:07:31.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.335 
"is_configured": false, 00:07:31.335 "data_offset": 0, 00:07:31.335 "data_size": 0 00:07:31.335 }, 00:07:31.335 { 00:07:31.335 "name": "BaseBdev2", 00:07:31.335 "uuid": "1b2c2ffe-2068-470c-b6c1-dc99015f4313", 00:07:31.335 "is_configured": true, 00:07:31.335 "data_offset": 0, 00:07:31.335 "data_size": 65536 00:07:31.335 }, 00:07:31.335 { 00:07:31.335 "name": "BaseBdev3", 00:07:31.335 "uuid": "83bad1c5-491e-4ffa-a63f-ab759e842cca", 00:07:31.335 "is_configured": true, 00:07:31.335 "data_offset": 0, 00:07:31.335 "data_size": 65536 00:07:31.335 } 00:07:31.335 ] 00:07:31.335 }' 00:07:31.335 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.335 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.594 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:31.594 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.594 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.594 [2024-11-02 23:47:25.566794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:31.594 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.594 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:31.594 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.594 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:31.594 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.594 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.594 23:47:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:31.594 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.594 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.594 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.594 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.594 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.594 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.594 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.594 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.594 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.594 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.594 "name": "Existed_Raid", 00:07:31.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.594 "strip_size_kb": 64, 00:07:31.594 "state": "configuring", 00:07:31.594 "raid_level": "raid0", 00:07:31.594 "superblock": false, 00:07:31.594 "num_base_bdevs": 3, 00:07:31.594 "num_base_bdevs_discovered": 1, 00:07:31.594 "num_base_bdevs_operational": 3, 00:07:31.594 "base_bdevs_list": [ 00:07:31.594 { 00:07:31.594 "name": "BaseBdev1", 00:07:31.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.594 "is_configured": false, 00:07:31.594 "data_offset": 0, 00:07:31.594 "data_size": 0 00:07:31.594 }, 00:07:31.594 { 00:07:31.594 "name": null, 00:07:31.594 "uuid": "1b2c2ffe-2068-470c-b6c1-dc99015f4313", 00:07:31.594 "is_configured": false, 00:07:31.594 "data_offset": 0, 
00:07:31.594 "data_size": 65536 00:07:31.594 }, 00:07:31.594 { 00:07:31.594 "name": "BaseBdev3", 00:07:31.594 "uuid": "83bad1c5-491e-4ffa-a63f-ab759e842cca", 00:07:31.594 "is_configured": true, 00:07:31.595 "data_offset": 0, 00:07:31.595 "data_size": 65536 00:07:31.595 } 00:07:31.595 ] 00:07:31.595 }' 00:07:31.595 23:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.595 23:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.160 [2024-11-02 23:47:26.130793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:32.160 BaseBdev1 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev1 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.160 [ 00:07:32.160 { 00:07:32.160 "name": "BaseBdev1", 00:07:32.160 "aliases": [ 00:07:32.160 "f481daeb-d651-4013-97bd-d5cf696fc220" 00:07:32.160 ], 00:07:32.160 "product_name": "Malloc disk", 00:07:32.160 "block_size": 512, 00:07:32.160 "num_blocks": 65536, 00:07:32.160 "uuid": "f481daeb-d651-4013-97bd-d5cf696fc220", 00:07:32.160 "assigned_rate_limits": { 00:07:32.160 "rw_ios_per_sec": 0, 00:07:32.160 "rw_mbytes_per_sec": 0, 00:07:32.160 "r_mbytes_per_sec": 0, 00:07:32.160 "w_mbytes_per_sec": 0 00:07:32.160 }, 00:07:32.160 "claimed": true, 00:07:32.160 "claim_type": "exclusive_write", 00:07:32.160 "zoned": false, 00:07:32.160 "supported_io_types": { 00:07:32.160 "read": true, 00:07:32.160 "write": true, 00:07:32.160 "unmap": 
true, 00:07:32.160 "flush": true, 00:07:32.160 "reset": true, 00:07:32.160 "nvme_admin": false, 00:07:32.160 "nvme_io": false, 00:07:32.160 "nvme_io_md": false, 00:07:32.160 "write_zeroes": true, 00:07:32.160 "zcopy": true, 00:07:32.160 "get_zone_info": false, 00:07:32.160 "zone_management": false, 00:07:32.160 "zone_append": false, 00:07:32.160 "compare": false, 00:07:32.160 "compare_and_write": false, 00:07:32.160 "abort": true, 00:07:32.160 "seek_hole": false, 00:07:32.160 "seek_data": false, 00:07:32.160 "copy": true, 00:07:32.160 "nvme_iov_md": false 00:07:32.160 }, 00:07:32.160 "memory_domains": [ 00:07:32.160 { 00:07:32.160 "dma_device_id": "system", 00:07:32.160 "dma_device_type": 1 00:07:32.160 }, 00:07:32.160 { 00:07:32.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.160 "dma_device_type": 2 00:07:32.160 } 00:07:32.160 ], 00:07:32.160 "driver_specific": {} 00:07:32.160 } 00:07:32.160 ] 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.160 23:47:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.160 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.160 "name": "Existed_Raid", 00:07:32.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.160 "strip_size_kb": 64, 00:07:32.160 "state": "configuring", 00:07:32.161 "raid_level": "raid0", 00:07:32.161 "superblock": false, 00:07:32.161 "num_base_bdevs": 3, 00:07:32.161 "num_base_bdevs_discovered": 2, 00:07:32.161 "num_base_bdevs_operational": 3, 00:07:32.161 "base_bdevs_list": [ 00:07:32.161 { 00:07:32.161 "name": "BaseBdev1", 00:07:32.161 "uuid": "f481daeb-d651-4013-97bd-d5cf696fc220", 00:07:32.161 "is_configured": true, 00:07:32.161 "data_offset": 0, 00:07:32.161 "data_size": 65536 00:07:32.161 }, 00:07:32.161 { 00:07:32.161 "name": null, 00:07:32.161 "uuid": "1b2c2ffe-2068-470c-b6c1-dc99015f4313", 00:07:32.161 "is_configured": false, 00:07:32.161 "data_offset": 0, 00:07:32.161 "data_size": 65536 00:07:32.161 }, 00:07:32.161 { 00:07:32.161 "name": "BaseBdev3", 00:07:32.161 "uuid": "83bad1c5-491e-4ffa-a63f-ab759e842cca", 00:07:32.161 "is_configured": true, 00:07:32.161 "data_offset": 0, 
00:07:32.161 "data_size": 65536 00:07:32.161 } 00:07:32.161 ] 00:07:32.161 }' 00:07:32.161 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.161 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.730 [2024-11-02 23:47:26.685984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.730 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.731 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.731 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.731 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.731 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.731 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.731 "name": "Existed_Raid", 00:07:32.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.731 "strip_size_kb": 64, 00:07:32.731 "state": "configuring", 00:07:32.731 "raid_level": "raid0", 00:07:32.731 "superblock": false, 00:07:32.731 "num_base_bdevs": 3, 00:07:32.731 "num_base_bdevs_discovered": 1, 00:07:32.731 "num_base_bdevs_operational": 3, 00:07:32.731 "base_bdevs_list": [ 00:07:32.731 { 00:07:32.731 "name": "BaseBdev1", 00:07:32.731 "uuid": "f481daeb-d651-4013-97bd-d5cf696fc220", 00:07:32.731 "is_configured": true, 00:07:32.731 "data_offset": 0, 00:07:32.731 "data_size": 65536 00:07:32.731 }, 00:07:32.731 { 
00:07:32.731 "name": null, 00:07:32.731 "uuid": "1b2c2ffe-2068-470c-b6c1-dc99015f4313", 00:07:32.731 "is_configured": false, 00:07:32.731 "data_offset": 0, 00:07:32.731 "data_size": 65536 00:07:32.731 }, 00:07:32.731 { 00:07:32.731 "name": null, 00:07:32.731 "uuid": "83bad1c5-491e-4ffa-a63f-ab759e842cca", 00:07:32.731 "is_configured": false, 00:07:32.731 "data_offset": 0, 00:07:32.731 "data_size": 65536 00:07:32.731 } 00:07:32.731 ] 00:07:32.731 }' 00:07:32.731 23:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.731 23:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.300 [2024-11-02 23:47:27.177076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.300 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.300 "name": "Existed_Raid", 00:07:33.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.300 "strip_size_kb": 64, 00:07:33.300 "state": "configuring", 00:07:33.300 "raid_level": "raid0", 00:07:33.300 
"superblock": false, 00:07:33.300 "num_base_bdevs": 3, 00:07:33.300 "num_base_bdevs_discovered": 2, 00:07:33.300 "num_base_bdevs_operational": 3, 00:07:33.300 "base_bdevs_list": [ 00:07:33.300 { 00:07:33.300 "name": "BaseBdev1", 00:07:33.300 "uuid": "f481daeb-d651-4013-97bd-d5cf696fc220", 00:07:33.300 "is_configured": true, 00:07:33.300 "data_offset": 0, 00:07:33.300 "data_size": 65536 00:07:33.300 }, 00:07:33.300 { 00:07:33.300 "name": null, 00:07:33.300 "uuid": "1b2c2ffe-2068-470c-b6c1-dc99015f4313", 00:07:33.300 "is_configured": false, 00:07:33.300 "data_offset": 0, 00:07:33.300 "data_size": 65536 00:07:33.301 }, 00:07:33.301 { 00:07:33.301 "name": "BaseBdev3", 00:07:33.301 "uuid": "83bad1c5-491e-4ffa-a63f-ab759e842cca", 00:07:33.301 "is_configured": true, 00:07:33.301 "data_offset": 0, 00:07:33.301 "data_size": 65536 00:07:33.301 } 00:07:33.301 ] 00:07:33.301 }' 00:07:33.301 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.301 23:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.560 [2024-11-02 23:47:27.628341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.560 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.818 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.818 23:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.818 23:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.818 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.818 23:47:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.818 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.818 "name": "Existed_Raid", 00:07:33.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.818 "strip_size_kb": 64, 00:07:33.818 "state": "configuring", 00:07:33.818 "raid_level": "raid0", 00:07:33.818 "superblock": false, 00:07:33.818 "num_base_bdevs": 3, 00:07:33.818 "num_base_bdevs_discovered": 1, 00:07:33.818 "num_base_bdevs_operational": 3, 00:07:33.818 "base_bdevs_list": [ 00:07:33.818 { 00:07:33.818 "name": null, 00:07:33.818 "uuid": "f481daeb-d651-4013-97bd-d5cf696fc220", 00:07:33.818 "is_configured": false, 00:07:33.818 "data_offset": 0, 00:07:33.818 "data_size": 65536 00:07:33.818 }, 00:07:33.818 { 00:07:33.818 "name": null, 00:07:33.818 "uuid": "1b2c2ffe-2068-470c-b6c1-dc99015f4313", 00:07:33.818 "is_configured": false, 00:07:33.818 "data_offset": 0, 00:07:33.818 "data_size": 65536 00:07:33.818 }, 00:07:33.818 { 00:07:33.818 "name": "BaseBdev3", 00:07:33.818 "uuid": "83bad1c5-491e-4ffa-a63f-ab759e842cca", 00:07:33.818 "is_configured": true, 00:07:33.818 "data_offset": 0, 00:07:33.818 "data_size": 65536 00:07:33.818 } 00:07:33.818 ] 00:07:33.819 }' 00:07:33.819 23:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.819 23:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.077 [2024-11-02 23:47:28.143617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.077 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.335 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.335 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.335 "name": "Existed_Raid", 00:07:34.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.335 "strip_size_kb": 64, 00:07:34.335 "state": "configuring", 00:07:34.335 "raid_level": "raid0", 00:07:34.335 "superblock": false, 00:07:34.335 "num_base_bdevs": 3, 00:07:34.335 "num_base_bdevs_discovered": 2, 00:07:34.335 "num_base_bdevs_operational": 3, 00:07:34.335 "base_bdevs_list": [ 00:07:34.335 { 00:07:34.335 "name": null, 00:07:34.335 "uuid": "f481daeb-d651-4013-97bd-d5cf696fc220", 00:07:34.335 "is_configured": false, 00:07:34.335 "data_offset": 0, 00:07:34.335 "data_size": 65536 00:07:34.335 }, 00:07:34.335 { 00:07:34.335 "name": "BaseBdev2", 00:07:34.335 "uuid": "1b2c2ffe-2068-470c-b6c1-dc99015f4313", 00:07:34.335 "is_configured": true, 00:07:34.335 "data_offset": 0, 00:07:34.335 "data_size": 65536 00:07:34.335 }, 00:07:34.335 { 00:07:34.335 "name": "BaseBdev3", 00:07:34.335 "uuid": "83bad1c5-491e-4ffa-a63f-ab759e842cca", 00:07:34.335 "is_configured": true, 00:07:34.335 "data_offset": 0, 00:07:34.335 "data_size": 65536 00:07:34.335 } 00:07:34.335 ] 00:07:34.335 }' 00:07:34.335 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.335 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.592 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.592 23:47:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:34.592 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.592 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.592 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.592 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:34.592 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.592 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.592 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.592 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:34.592 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.592 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f481daeb-d651-4013-97bd-d5cf696fc220 00:07:34.592 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.592 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.592 [2024-11-02 23:47:28.683504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:34.592 [2024-11-02 23:47:28.683565] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:34.592 [2024-11-02 23:47:28.683597] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:34.592 [2024-11-02 23:47:28.683897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 
00:07:34.592 [2024-11-02 23:47:28.684051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:34.592 [2024-11-02 23:47:28.684070] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:07:34.592 [2024-11-02 23:47:28.684290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.593 NewBaseBdev 00:07:34.593 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.593 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:34.852 [ 00:07:34.852 { 00:07:34.852 "name": "NewBaseBdev", 00:07:34.852 "aliases": [ 00:07:34.852 "f481daeb-d651-4013-97bd-d5cf696fc220" 00:07:34.852 ], 00:07:34.852 "product_name": "Malloc disk", 00:07:34.852 "block_size": 512, 00:07:34.852 "num_blocks": 65536, 00:07:34.852 "uuid": "f481daeb-d651-4013-97bd-d5cf696fc220", 00:07:34.852 "assigned_rate_limits": { 00:07:34.852 "rw_ios_per_sec": 0, 00:07:34.852 "rw_mbytes_per_sec": 0, 00:07:34.852 "r_mbytes_per_sec": 0, 00:07:34.852 "w_mbytes_per_sec": 0 00:07:34.852 }, 00:07:34.852 "claimed": true, 00:07:34.852 "claim_type": "exclusive_write", 00:07:34.852 "zoned": false, 00:07:34.852 "supported_io_types": { 00:07:34.852 "read": true, 00:07:34.852 "write": true, 00:07:34.852 "unmap": true, 00:07:34.852 "flush": true, 00:07:34.852 "reset": true, 00:07:34.852 "nvme_admin": false, 00:07:34.852 "nvme_io": false, 00:07:34.852 "nvme_io_md": false, 00:07:34.852 "write_zeroes": true, 00:07:34.852 "zcopy": true, 00:07:34.852 "get_zone_info": false, 00:07:34.852 "zone_management": false, 00:07:34.852 "zone_append": false, 00:07:34.852 "compare": false, 00:07:34.852 "compare_and_write": false, 00:07:34.852 "abort": true, 00:07:34.852 "seek_hole": false, 00:07:34.852 "seek_data": false, 00:07:34.852 "copy": true, 00:07:34.852 "nvme_iov_md": false 00:07:34.852 }, 00:07:34.852 "memory_domains": [ 00:07:34.852 { 00:07:34.852 "dma_device_id": "system", 00:07:34.852 "dma_device_type": 1 00:07:34.852 }, 00:07:34.852 { 00:07:34.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.852 "dma_device_type": 2 00:07:34.852 } 00:07:34.852 ], 00:07:34.852 "driver_specific": {} 00:07:34.852 } 00:07:34.852 ] 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.852 "name": "Existed_Raid", 00:07:34.852 "uuid": "1973d28d-42d1-41aa-9ee3-7dbd2ab44f90", 00:07:34.852 "strip_size_kb": 64, 00:07:34.852 "state": "online", 00:07:34.852 "raid_level": "raid0", 00:07:34.852 "superblock": false, 00:07:34.852 "num_base_bdevs": 3, 00:07:34.852 
"num_base_bdevs_discovered": 3, 00:07:34.852 "num_base_bdevs_operational": 3, 00:07:34.852 "base_bdevs_list": [ 00:07:34.852 { 00:07:34.852 "name": "NewBaseBdev", 00:07:34.852 "uuid": "f481daeb-d651-4013-97bd-d5cf696fc220", 00:07:34.852 "is_configured": true, 00:07:34.852 "data_offset": 0, 00:07:34.852 "data_size": 65536 00:07:34.852 }, 00:07:34.852 { 00:07:34.852 "name": "BaseBdev2", 00:07:34.852 "uuid": "1b2c2ffe-2068-470c-b6c1-dc99015f4313", 00:07:34.852 "is_configured": true, 00:07:34.852 "data_offset": 0, 00:07:34.852 "data_size": 65536 00:07:34.852 }, 00:07:34.852 { 00:07:34.852 "name": "BaseBdev3", 00:07:34.852 "uuid": "83bad1c5-491e-4ffa-a63f-ab759e842cca", 00:07:34.852 "is_configured": true, 00:07:34.852 "data_offset": 0, 00:07:34.852 "data_size": 65536 00:07:34.852 } 00:07:34.852 ] 00:07:34.852 }' 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.852 23:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.112 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:35.112 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:35.112 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:35.112 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:35.112 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:35.112 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:35.112 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:35.112 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:35.112 23:47:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.112 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.112 [2024-11-02 23:47:29.131211] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:35.112 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.112 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:35.112 "name": "Existed_Raid", 00:07:35.112 "aliases": [ 00:07:35.112 "1973d28d-42d1-41aa-9ee3-7dbd2ab44f90" 00:07:35.112 ], 00:07:35.112 "product_name": "Raid Volume", 00:07:35.112 "block_size": 512, 00:07:35.112 "num_blocks": 196608, 00:07:35.112 "uuid": "1973d28d-42d1-41aa-9ee3-7dbd2ab44f90", 00:07:35.112 "assigned_rate_limits": { 00:07:35.112 "rw_ios_per_sec": 0, 00:07:35.112 "rw_mbytes_per_sec": 0, 00:07:35.112 "r_mbytes_per_sec": 0, 00:07:35.112 "w_mbytes_per_sec": 0 00:07:35.112 }, 00:07:35.112 "claimed": false, 00:07:35.112 "zoned": false, 00:07:35.112 "supported_io_types": { 00:07:35.112 "read": true, 00:07:35.112 "write": true, 00:07:35.112 "unmap": true, 00:07:35.112 "flush": true, 00:07:35.112 "reset": true, 00:07:35.112 "nvme_admin": false, 00:07:35.112 "nvme_io": false, 00:07:35.112 "nvme_io_md": false, 00:07:35.112 "write_zeroes": true, 00:07:35.112 "zcopy": false, 00:07:35.112 "get_zone_info": false, 00:07:35.112 "zone_management": false, 00:07:35.112 "zone_append": false, 00:07:35.112 "compare": false, 00:07:35.112 "compare_and_write": false, 00:07:35.112 "abort": false, 00:07:35.112 "seek_hole": false, 00:07:35.112 "seek_data": false, 00:07:35.112 "copy": false, 00:07:35.112 "nvme_iov_md": false 00:07:35.112 }, 00:07:35.112 "memory_domains": [ 00:07:35.112 { 00:07:35.112 "dma_device_id": "system", 00:07:35.112 "dma_device_type": 1 00:07:35.112 }, 00:07:35.112 { 00:07:35.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.112 "dma_device_type": 2 00:07:35.112 }, 
00:07:35.112 { 00:07:35.112 "dma_device_id": "system", 00:07:35.112 "dma_device_type": 1 00:07:35.112 }, 00:07:35.112 { 00:07:35.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.112 "dma_device_type": 2 00:07:35.112 }, 00:07:35.112 { 00:07:35.112 "dma_device_id": "system", 00:07:35.112 "dma_device_type": 1 00:07:35.112 }, 00:07:35.112 { 00:07:35.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.112 "dma_device_type": 2 00:07:35.112 } 00:07:35.112 ], 00:07:35.112 "driver_specific": { 00:07:35.112 "raid": { 00:07:35.112 "uuid": "1973d28d-42d1-41aa-9ee3-7dbd2ab44f90", 00:07:35.112 "strip_size_kb": 64, 00:07:35.112 "state": "online", 00:07:35.112 "raid_level": "raid0", 00:07:35.112 "superblock": false, 00:07:35.112 "num_base_bdevs": 3, 00:07:35.112 "num_base_bdevs_discovered": 3, 00:07:35.112 "num_base_bdevs_operational": 3, 00:07:35.112 "base_bdevs_list": [ 00:07:35.112 { 00:07:35.112 "name": "NewBaseBdev", 00:07:35.112 "uuid": "f481daeb-d651-4013-97bd-d5cf696fc220", 00:07:35.112 "is_configured": true, 00:07:35.112 "data_offset": 0, 00:07:35.112 "data_size": 65536 00:07:35.112 }, 00:07:35.112 { 00:07:35.112 "name": "BaseBdev2", 00:07:35.112 "uuid": "1b2c2ffe-2068-470c-b6c1-dc99015f4313", 00:07:35.112 "is_configured": true, 00:07:35.112 "data_offset": 0, 00:07:35.112 "data_size": 65536 00:07:35.112 }, 00:07:35.112 { 00:07:35.112 "name": "BaseBdev3", 00:07:35.112 "uuid": "83bad1c5-491e-4ffa-a63f-ab759e842cca", 00:07:35.112 "is_configured": true, 00:07:35.112 "data_offset": 0, 00:07:35.112 "data_size": 65536 00:07:35.112 } 00:07:35.112 ] 00:07:35.112 } 00:07:35.112 } 00:07:35.112 }' 00:07:35.112 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:35.372 BaseBdev2 00:07:35.372 BaseBdev3' 00:07:35.372 23:47:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.372 [2024-11-02 23:47:29.394459] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:35.372 [2024-11-02 23:47:29.394495] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:35.372 [2024-11-02 23:47:29.394604] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.372 [2024-11-02 23:47:29.394669] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.372 [2024-11-02 23:47:29.394685] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74841 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 74841 ']' 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 74841 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74841 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:35.372 killing process with pid 74841 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74841' 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 74841 00:07:35.372 [2024-11-02 23:47:29.444809] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.372 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 74841 00:07:35.640 [2024-11-02 23:47:29.502536] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:35.902 00:07:35.902 real 0m8.992s 00:07:35.902 user 0m15.075s 00:07:35.902 sys 0m1.923s 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.902 ************************************ 00:07:35.902 END TEST raid_state_function_test 00:07:35.902 ************************************ 00:07:35.902 23:47:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:07:35.902 23:47:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:35.902 23:47:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:35.902 23:47:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.902 ************************************ 00:07:35.902 START TEST raid_state_function_test_sb 00:07:35.902 ************************************ 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75446 00:07:35.902 23:47:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75446' 00:07:35.902 Process raid pid: 75446 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75446 00:07:35.902 23:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 75446 ']' 00:07:35.903 23:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.903 23:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:35.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.903 23:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.903 23:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:35.903 23:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.161 [2024-11-02 23:47:30.001114] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:07:36.161 [2024-11-02 23:47:30.001229] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.161 [2024-11-02 23:47:30.157538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.161 [2024-11-02 23:47:30.197297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.420 [2024-11-02 23:47:30.273516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.420 [2024-11-02 23:47:30.273570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.990 [2024-11-02 23:47:30.857314] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.990 [2024-11-02 23:47:30.857407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.990 [2024-11-02 23:47:30.857420] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.990 [2024-11-02 23:47:30.857433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.990 [2024-11-02 23:47:30.857441] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:07:36.990 [2024-11-02 23:47:30.857457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.990 "name": "Existed_Raid", 00:07:36.990 "uuid": "94affb5a-ddad-45c7-8f38-2f7ad2cd76c0", 00:07:36.990 "strip_size_kb": 64, 00:07:36.990 "state": "configuring", 00:07:36.990 "raid_level": "raid0", 00:07:36.990 "superblock": true, 00:07:36.990 "num_base_bdevs": 3, 00:07:36.990 "num_base_bdevs_discovered": 0, 00:07:36.990 "num_base_bdevs_operational": 3, 00:07:36.990 "base_bdevs_list": [ 00:07:36.990 { 00:07:36.990 "name": "BaseBdev1", 00:07:36.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.990 "is_configured": false, 00:07:36.990 "data_offset": 0, 00:07:36.990 "data_size": 0 00:07:36.990 }, 00:07:36.990 { 00:07:36.990 "name": "BaseBdev2", 00:07:36.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.990 "is_configured": false, 00:07:36.990 "data_offset": 0, 00:07:36.990 "data_size": 0 00:07:36.990 }, 00:07:36.990 { 00:07:36.990 "name": "BaseBdev3", 00:07:36.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.990 "is_configured": false, 00:07:36.990 "data_offset": 0, 00:07:36.990 "data_size": 0 00:07:36.990 } 00:07:36.990 ] 00:07:36.990 }' 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.990 23:47:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.250 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:37.250 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.250 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.250 [2024-11-02 23:47:31.336284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:37.250 [2024-11-02 23:47:31.336346] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:37.250 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.250 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:37.250 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.250 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.522 [2024-11-02 23:47:31.348256] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:37.522 [2024-11-02 23:47:31.348312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:37.522 [2024-11-02 23:47:31.348323] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.522 [2024-11-02 23:47:31.348335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.522 [2024-11-02 23:47:31.348343] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:37.522 [2024-11-02 23:47:31.348355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.522 [2024-11-02 23:47:31.375421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.522 BaseBdev1 
00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.522 [ 00:07:37.522 { 00:07:37.522 "name": "BaseBdev1", 00:07:37.522 "aliases": [ 00:07:37.522 "a2a39331-5614-4fd4-9d61-90aac4b2b6e9" 00:07:37.522 ], 00:07:37.522 "product_name": "Malloc disk", 00:07:37.522 "block_size": 512, 00:07:37.522 "num_blocks": 65536, 00:07:37.522 "uuid": "a2a39331-5614-4fd4-9d61-90aac4b2b6e9", 00:07:37.522 "assigned_rate_limits": { 00:07:37.522 
"rw_ios_per_sec": 0, 00:07:37.522 "rw_mbytes_per_sec": 0, 00:07:37.522 "r_mbytes_per_sec": 0, 00:07:37.522 "w_mbytes_per_sec": 0 00:07:37.522 }, 00:07:37.522 "claimed": true, 00:07:37.522 "claim_type": "exclusive_write", 00:07:37.522 "zoned": false, 00:07:37.522 "supported_io_types": { 00:07:37.522 "read": true, 00:07:37.522 "write": true, 00:07:37.522 "unmap": true, 00:07:37.522 "flush": true, 00:07:37.522 "reset": true, 00:07:37.522 "nvme_admin": false, 00:07:37.522 "nvme_io": false, 00:07:37.522 "nvme_io_md": false, 00:07:37.522 "write_zeroes": true, 00:07:37.522 "zcopy": true, 00:07:37.522 "get_zone_info": false, 00:07:37.522 "zone_management": false, 00:07:37.522 "zone_append": false, 00:07:37.522 "compare": false, 00:07:37.522 "compare_and_write": false, 00:07:37.522 "abort": true, 00:07:37.522 "seek_hole": false, 00:07:37.522 "seek_data": false, 00:07:37.522 "copy": true, 00:07:37.522 "nvme_iov_md": false 00:07:37.522 }, 00:07:37.522 "memory_domains": [ 00:07:37.522 { 00:07:37.522 "dma_device_id": "system", 00:07:37.522 "dma_device_type": 1 00:07:37.522 }, 00:07:37.522 { 00:07:37.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.522 "dma_device_type": 2 00:07:37.522 } 00:07:37.522 ], 00:07:37.522 "driver_specific": {} 00:07:37.522 } 00:07:37.522 ] 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.522 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.522 "name": "Existed_Raid", 00:07:37.522 "uuid": "ceae2356-cd5e-474a-a8e0-26f60bfe8bd0", 00:07:37.522 "strip_size_kb": 64, 00:07:37.522 "state": "configuring", 00:07:37.522 "raid_level": "raid0", 00:07:37.522 "superblock": true, 00:07:37.522 "num_base_bdevs": 3, 00:07:37.522 "num_base_bdevs_discovered": 1, 00:07:37.522 "num_base_bdevs_operational": 3, 00:07:37.522 "base_bdevs_list": [ 00:07:37.522 { 00:07:37.522 "name": "BaseBdev1", 00:07:37.522 "uuid": "a2a39331-5614-4fd4-9d61-90aac4b2b6e9", 00:07:37.522 "is_configured": true, 00:07:37.522 "data_offset": 2048, 00:07:37.522 "data_size": 63488 
00:07:37.522 }, 00:07:37.522 { 00:07:37.522 "name": "BaseBdev2", 00:07:37.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.522 "is_configured": false, 00:07:37.522 "data_offset": 0, 00:07:37.522 "data_size": 0 00:07:37.522 }, 00:07:37.522 { 00:07:37.522 "name": "BaseBdev3", 00:07:37.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.523 "is_configured": false, 00:07:37.523 "data_offset": 0, 00:07:37.523 "data_size": 0 00:07:37.523 } 00:07:37.523 ] 00:07:37.523 }' 00:07:37.523 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.523 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.794 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:37.794 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.794 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.794 [2024-11-02 23:47:31.862704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:37.794 [2024-11-02 23:47:31.862794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:37.794 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.795 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:37.795 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.795 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.795 [2024-11-02 23:47:31.874704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.795 [2024-11-02 
23:47:31.876919] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.795 [2024-11-02 23:47:31.876971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.795 [2024-11-02 23:47:31.876984] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:37.795 [2024-11-02 23:47:31.876996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:37.795 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.795 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:37.795 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.795 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:37.795 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.795 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.795 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.795 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.795 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:37.795 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.795 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.795 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.795 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:07:37.795 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.054 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.054 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.054 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.054 23:47:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.054 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.054 "name": "Existed_Raid", 00:07:38.054 "uuid": "0ab8bf0d-894f-454a-9c20-2d7a958c3964", 00:07:38.054 "strip_size_kb": 64, 00:07:38.054 "state": "configuring", 00:07:38.054 "raid_level": "raid0", 00:07:38.054 "superblock": true, 00:07:38.054 "num_base_bdevs": 3, 00:07:38.054 "num_base_bdevs_discovered": 1, 00:07:38.054 "num_base_bdevs_operational": 3, 00:07:38.054 "base_bdevs_list": [ 00:07:38.054 { 00:07:38.054 "name": "BaseBdev1", 00:07:38.054 "uuid": "a2a39331-5614-4fd4-9d61-90aac4b2b6e9", 00:07:38.054 "is_configured": true, 00:07:38.054 "data_offset": 2048, 00:07:38.054 "data_size": 63488 00:07:38.054 }, 00:07:38.054 { 00:07:38.054 "name": "BaseBdev2", 00:07:38.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.054 "is_configured": false, 00:07:38.054 "data_offset": 0, 00:07:38.054 "data_size": 0 00:07:38.054 }, 00:07:38.054 { 00:07:38.054 "name": "BaseBdev3", 00:07:38.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.054 "is_configured": false, 00:07:38.054 "data_offset": 0, 00:07:38.054 "data_size": 0 00:07:38.054 } 00:07:38.054 ] 00:07:38.054 }' 00:07:38.054 23:47:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.054 23:47:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.314 [2024-11-02 23:47:32.338912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:38.314 BaseBdev2 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.314 [ 00:07:38.314 { 00:07:38.314 "name": "BaseBdev2", 00:07:38.314 "aliases": [ 00:07:38.314 "b72c2524-e791-499d-baa2-68de3a8f7244" 00:07:38.314 ], 00:07:38.314 "product_name": "Malloc disk", 00:07:38.314 "block_size": 512, 00:07:38.314 "num_blocks": 65536, 00:07:38.314 "uuid": "b72c2524-e791-499d-baa2-68de3a8f7244", 00:07:38.314 "assigned_rate_limits": { 00:07:38.314 "rw_ios_per_sec": 0, 00:07:38.314 "rw_mbytes_per_sec": 0, 00:07:38.314 "r_mbytes_per_sec": 0, 00:07:38.314 "w_mbytes_per_sec": 0 00:07:38.314 }, 00:07:38.314 "claimed": true, 00:07:38.314 "claim_type": "exclusive_write", 00:07:38.314 "zoned": false, 00:07:38.314 "supported_io_types": { 00:07:38.314 "read": true, 00:07:38.314 "write": true, 00:07:38.314 "unmap": true, 00:07:38.314 "flush": true, 00:07:38.314 "reset": true, 00:07:38.314 "nvme_admin": false, 00:07:38.314 "nvme_io": false, 00:07:38.314 "nvme_io_md": false, 00:07:38.314 "write_zeroes": true, 00:07:38.314 "zcopy": true, 00:07:38.314 "get_zone_info": false, 00:07:38.314 "zone_management": false, 00:07:38.314 "zone_append": false, 00:07:38.314 "compare": false, 00:07:38.314 "compare_and_write": false, 00:07:38.314 "abort": true, 00:07:38.314 "seek_hole": false, 00:07:38.314 "seek_data": false, 00:07:38.314 "copy": true, 00:07:38.314 "nvme_iov_md": false 00:07:38.314 }, 00:07:38.314 "memory_domains": [ 00:07:38.314 { 00:07:38.314 "dma_device_id": "system", 00:07:38.314 "dma_device_type": 1 00:07:38.314 }, 00:07:38.314 { 00:07:38.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.314 "dma_device_type": 2 00:07:38.314 } 00:07:38.314 ], 00:07:38.314 "driver_specific": {} 00:07:38.314 } 00:07:38.314 ] 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:38.314 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.315 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.315 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.315 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.315 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:38.315 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.315 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.315 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.315 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.315 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.315 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.315 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.315 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.315 23:47:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.574 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.574 "name": "Existed_Raid", 00:07:38.574 "uuid": "0ab8bf0d-894f-454a-9c20-2d7a958c3964", 00:07:38.574 "strip_size_kb": 64, 00:07:38.574 "state": "configuring", 00:07:38.574 "raid_level": "raid0", 00:07:38.574 "superblock": true, 00:07:38.574 "num_base_bdevs": 3, 00:07:38.574 "num_base_bdevs_discovered": 2, 00:07:38.574 "num_base_bdevs_operational": 3, 00:07:38.574 "base_bdevs_list": [ 00:07:38.574 { 00:07:38.574 "name": "BaseBdev1", 00:07:38.574 "uuid": "a2a39331-5614-4fd4-9d61-90aac4b2b6e9", 00:07:38.574 "is_configured": true, 00:07:38.574 "data_offset": 2048, 00:07:38.574 "data_size": 63488 00:07:38.574 }, 00:07:38.574 { 00:07:38.574 "name": "BaseBdev2", 00:07:38.574 "uuid": "b72c2524-e791-499d-baa2-68de3a8f7244", 00:07:38.574 "is_configured": true, 00:07:38.574 "data_offset": 2048, 00:07:38.574 "data_size": 63488 00:07:38.574 }, 00:07:38.574 { 00:07:38.574 "name": "BaseBdev3", 00:07:38.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.574 "is_configured": false, 00:07:38.574 "data_offset": 0, 00:07:38.574 "data_size": 0 00:07:38.574 } 00:07:38.574 ] 00:07:38.574 }' 00:07:38.574 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.574 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.834 BaseBdev3 00:07:38.834 [2024-11-02 23:47:32.864091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:38.834 [2024-11-02 
23:47:32.864340] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:38.834 [2024-11-02 23:47:32.864362] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:38.834 [2024-11-02 23:47:32.864704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:38.834 [2024-11-02 23:47:32.864885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:38.834 [2024-11-02 23:47:32.864906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:38.834 [2024-11-02 23:47:32.865055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.834 [ 00:07:38.834 { 00:07:38.834 "name": "BaseBdev3", 00:07:38.834 "aliases": [ 00:07:38.834 "d462ca0b-2312-4e7b-9e91-9bf3d4bc7c58" 00:07:38.834 ], 00:07:38.834 "product_name": "Malloc disk", 00:07:38.834 "block_size": 512, 00:07:38.834 "num_blocks": 65536, 00:07:38.834 "uuid": "d462ca0b-2312-4e7b-9e91-9bf3d4bc7c58", 00:07:38.834 "assigned_rate_limits": { 00:07:38.834 "rw_ios_per_sec": 0, 00:07:38.834 "rw_mbytes_per_sec": 0, 00:07:38.834 "r_mbytes_per_sec": 0, 00:07:38.834 "w_mbytes_per_sec": 0 00:07:38.834 }, 00:07:38.834 "claimed": true, 00:07:38.834 "claim_type": "exclusive_write", 00:07:38.834 "zoned": false, 00:07:38.834 "supported_io_types": { 00:07:38.834 "read": true, 00:07:38.834 "write": true, 00:07:38.834 "unmap": true, 00:07:38.834 "flush": true, 00:07:38.834 "reset": true, 00:07:38.834 "nvme_admin": false, 00:07:38.834 "nvme_io": false, 00:07:38.834 "nvme_io_md": false, 00:07:38.834 "write_zeroes": true, 00:07:38.834 "zcopy": true, 00:07:38.834 "get_zone_info": false, 00:07:38.834 "zone_management": false, 00:07:38.834 "zone_append": false, 00:07:38.834 "compare": false, 00:07:38.834 "compare_and_write": false, 00:07:38.834 "abort": true, 00:07:38.834 "seek_hole": false, 00:07:38.834 "seek_data": false, 00:07:38.834 "copy": true, 00:07:38.834 "nvme_iov_md": false 00:07:38.834 }, 00:07:38.834 "memory_domains": [ 00:07:38.834 { 00:07:38.834 "dma_device_id": "system", 00:07:38.834 "dma_device_type": 1 00:07:38.834 }, 00:07:38.834 { 00:07:38.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.834 "dma_device_type": 2 00:07:38.834 } 00:07:38.834 ], 00:07:38.834 "driver_specific": {} 
00:07:38.834 } 00:07:38.834 ] 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.834 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:38.835 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.835 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.835 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:38.835 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.835 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.835 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.835 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.835 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.835 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.835 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.835 
23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.835 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.094 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.094 "name": "Existed_Raid", 00:07:39.094 "uuid": "0ab8bf0d-894f-454a-9c20-2d7a958c3964", 00:07:39.094 "strip_size_kb": 64, 00:07:39.094 "state": "online", 00:07:39.094 "raid_level": "raid0", 00:07:39.094 "superblock": true, 00:07:39.094 "num_base_bdevs": 3, 00:07:39.094 "num_base_bdevs_discovered": 3, 00:07:39.094 "num_base_bdevs_operational": 3, 00:07:39.094 "base_bdevs_list": [ 00:07:39.094 { 00:07:39.094 "name": "BaseBdev1", 00:07:39.094 "uuid": "a2a39331-5614-4fd4-9d61-90aac4b2b6e9", 00:07:39.094 "is_configured": true, 00:07:39.094 "data_offset": 2048, 00:07:39.094 "data_size": 63488 00:07:39.094 }, 00:07:39.094 { 00:07:39.094 "name": "BaseBdev2", 00:07:39.094 "uuid": "b72c2524-e791-499d-baa2-68de3a8f7244", 00:07:39.094 "is_configured": true, 00:07:39.094 "data_offset": 2048, 00:07:39.094 "data_size": 63488 00:07:39.094 }, 00:07:39.094 { 00:07:39.094 "name": "BaseBdev3", 00:07:39.094 "uuid": "d462ca0b-2312-4e7b-9e91-9bf3d4bc7c58", 00:07:39.094 "is_configured": true, 00:07:39.094 "data_offset": 2048, 00:07:39.094 "data_size": 63488 00:07:39.094 } 00:07:39.094 ] 00:07:39.094 }' 00:07:39.094 23:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.094 23:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:39.354 [2024-11-02 23:47:33.299784] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:39.354 "name": "Existed_Raid", 00:07:39.354 "aliases": [ 00:07:39.354 "0ab8bf0d-894f-454a-9c20-2d7a958c3964" 00:07:39.354 ], 00:07:39.354 "product_name": "Raid Volume", 00:07:39.354 "block_size": 512, 00:07:39.354 "num_blocks": 190464, 00:07:39.354 "uuid": "0ab8bf0d-894f-454a-9c20-2d7a958c3964", 00:07:39.354 "assigned_rate_limits": { 00:07:39.354 "rw_ios_per_sec": 0, 00:07:39.354 "rw_mbytes_per_sec": 0, 00:07:39.354 "r_mbytes_per_sec": 0, 00:07:39.354 "w_mbytes_per_sec": 0 00:07:39.354 }, 00:07:39.354 "claimed": false, 00:07:39.354 "zoned": false, 00:07:39.354 "supported_io_types": { 00:07:39.354 "read": true, 00:07:39.354 "write": true, 00:07:39.354 "unmap": true, 00:07:39.354 "flush": true, 00:07:39.354 "reset": true, 00:07:39.354 "nvme_admin": false, 00:07:39.354 "nvme_io": false, 00:07:39.354 "nvme_io_md": false, 00:07:39.354 
"write_zeroes": true, 00:07:39.354 "zcopy": false, 00:07:39.354 "get_zone_info": false, 00:07:39.354 "zone_management": false, 00:07:39.354 "zone_append": false, 00:07:39.354 "compare": false, 00:07:39.354 "compare_and_write": false, 00:07:39.354 "abort": false, 00:07:39.354 "seek_hole": false, 00:07:39.354 "seek_data": false, 00:07:39.354 "copy": false, 00:07:39.354 "nvme_iov_md": false 00:07:39.354 }, 00:07:39.354 "memory_domains": [ 00:07:39.354 { 00:07:39.354 "dma_device_id": "system", 00:07:39.354 "dma_device_type": 1 00:07:39.354 }, 00:07:39.354 { 00:07:39.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.354 "dma_device_type": 2 00:07:39.354 }, 00:07:39.354 { 00:07:39.354 "dma_device_id": "system", 00:07:39.354 "dma_device_type": 1 00:07:39.354 }, 00:07:39.354 { 00:07:39.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.354 "dma_device_type": 2 00:07:39.354 }, 00:07:39.354 { 00:07:39.354 "dma_device_id": "system", 00:07:39.354 "dma_device_type": 1 00:07:39.354 }, 00:07:39.354 { 00:07:39.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.354 "dma_device_type": 2 00:07:39.354 } 00:07:39.354 ], 00:07:39.354 "driver_specific": { 00:07:39.354 "raid": { 00:07:39.354 "uuid": "0ab8bf0d-894f-454a-9c20-2d7a958c3964", 00:07:39.354 "strip_size_kb": 64, 00:07:39.354 "state": "online", 00:07:39.354 "raid_level": "raid0", 00:07:39.354 "superblock": true, 00:07:39.354 "num_base_bdevs": 3, 00:07:39.354 "num_base_bdevs_discovered": 3, 00:07:39.354 "num_base_bdevs_operational": 3, 00:07:39.354 "base_bdevs_list": [ 00:07:39.354 { 00:07:39.354 "name": "BaseBdev1", 00:07:39.354 "uuid": "a2a39331-5614-4fd4-9d61-90aac4b2b6e9", 00:07:39.354 "is_configured": true, 00:07:39.354 "data_offset": 2048, 00:07:39.354 "data_size": 63488 00:07:39.354 }, 00:07:39.354 { 00:07:39.354 "name": "BaseBdev2", 00:07:39.354 "uuid": "b72c2524-e791-499d-baa2-68de3a8f7244", 00:07:39.354 "is_configured": true, 00:07:39.354 "data_offset": 2048, 00:07:39.354 "data_size": 63488 00:07:39.354 }, 
00:07:39.354 { 00:07:39.354 "name": "BaseBdev3", 00:07:39.354 "uuid": "d462ca0b-2312-4e7b-9e91-9bf3d4bc7c58", 00:07:39.354 "is_configured": true, 00:07:39.354 "data_offset": 2048, 00:07:39.354 "data_size": 63488 00:07:39.354 } 00:07:39.354 ] 00:07:39.354 } 00:07:39.354 } 00:07:39.354 }' 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:39.354 BaseBdev2 00:07:39.354 BaseBdev3' 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.354 23:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.614 23:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.614 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.614 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.614 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.614 
23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:39.614 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.614 23:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.615 [2024-11-02 23:47:33.582978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:39.615 [2024-11-02 23:47:33.583023] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:39.615 [2024-11-02 23:47:33.583107] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.615 "name": "Existed_Raid", 00:07:39.615 "uuid": "0ab8bf0d-894f-454a-9c20-2d7a958c3964", 00:07:39.615 "strip_size_kb": 64, 00:07:39.615 "state": "offline", 00:07:39.615 "raid_level": "raid0", 00:07:39.615 "superblock": true, 00:07:39.615 "num_base_bdevs": 3, 00:07:39.615 "num_base_bdevs_discovered": 2, 00:07:39.615 "num_base_bdevs_operational": 2, 00:07:39.615 "base_bdevs_list": [ 00:07:39.615 { 00:07:39.615 "name": null, 00:07:39.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.615 "is_configured": false, 00:07:39.615 "data_offset": 0, 00:07:39.615 "data_size": 63488 00:07:39.615 }, 00:07:39.615 { 00:07:39.615 "name": "BaseBdev2", 00:07:39.615 "uuid": "b72c2524-e791-499d-baa2-68de3a8f7244", 00:07:39.615 "is_configured": true, 00:07:39.615 "data_offset": 2048, 00:07:39.615 "data_size": 63488 00:07:39.615 }, 00:07:39.615 { 00:07:39.615 "name": "BaseBdev3", 00:07:39.615 "uuid": "d462ca0b-2312-4e7b-9e91-9bf3d4bc7c58", 
00:07:39.615 "is_configured": true, 00:07:39.615 "data_offset": 2048, 00:07:39.615 "data_size": 63488 00:07:39.615 } 00:07:39.615 ] 00:07:39.615 }' 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.615 23:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.184 [2024-11-02 23:47:34.095096] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.184 [2024-11-02 23:47:34.172312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:40.184 [2024-11-02 23:47:34.172384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:40.184 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:40.185 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.185 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.185 BaseBdev2 00:07:40.185 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.185 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:40.185 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:40.185 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:40.185 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:40.185 23:47:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:40.185 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:40.185 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:40.185 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.185 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.450 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.450 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:40.450 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.451 [ 00:07:40.451 { 00:07:40.451 "name": "BaseBdev2", 00:07:40.451 "aliases": [ 00:07:40.451 "6c6dbc3b-4841-4d17-80fd-ba3711d41074" 00:07:40.451 ], 00:07:40.451 "product_name": "Malloc disk", 00:07:40.451 "block_size": 512, 00:07:40.451 "num_blocks": 65536, 00:07:40.451 "uuid": "6c6dbc3b-4841-4d17-80fd-ba3711d41074", 00:07:40.451 "assigned_rate_limits": { 00:07:40.451 "rw_ios_per_sec": 0, 00:07:40.451 "rw_mbytes_per_sec": 0, 00:07:40.451 "r_mbytes_per_sec": 0, 00:07:40.451 "w_mbytes_per_sec": 0 00:07:40.451 }, 00:07:40.451 "claimed": false, 00:07:40.451 "zoned": false, 00:07:40.451 "supported_io_types": { 00:07:40.451 "read": true, 00:07:40.451 "write": true, 00:07:40.451 "unmap": true, 00:07:40.451 "flush": true, 00:07:40.451 "reset": true, 00:07:40.451 "nvme_admin": false, 00:07:40.451 "nvme_io": false, 00:07:40.451 "nvme_io_md": false, 00:07:40.451 "write_zeroes": true, 00:07:40.451 "zcopy": true, 00:07:40.451 "get_zone_info": false, 00:07:40.451 
"zone_management": false, 00:07:40.451 "zone_append": false, 00:07:40.451 "compare": false, 00:07:40.451 "compare_and_write": false, 00:07:40.451 "abort": true, 00:07:40.451 "seek_hole": false, 00:07:40.451 "seek_data": false, 00:07:40.451 "copy": true, 00:07:40.451 "nvme_iov_md": false 00:07:40.451 }, 00:07:40.451 "memory_domains": [ 00:07:40.451 { 00:07:40.451 "dma_device_id": "system", 00:07:40.451 "dma_device_type": 1 00:07:40.451 }, 00:07:40.451 { 00:07:40.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.451 "dma_device_type": 2 00:07:40.451 } 00:07:40.451 ], 00:07:40.451 "driver_specific": {} 00:07:40.451 } 00:07:40.451 ] 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.451 BaseBdev3 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local i 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.451 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.451 [ 00:07:40.451 { 00:07:40.451 "name": "BaseBdev3", 00:07:40.451 "aliases": [ 00:07:40.451 "006515ea-6e87-4375-b566-7b2483d5797c" 00:07:40.452 ], 00:07:40.452 "product_name": "Malloc disk", 00:07:40.452 "block_size": 512, 00:07:40.452 "num_blocks": 65536, 00:07:40.452 "uuid": "006515ea-6e87-4375-b566-7b2483d5797c", 00:07:40.452 "assigned_rate_limits": { 00:07:40.452 "rw_ios_per_sec": 0, 00:07:40.452 "rw_mbytes_per_sec": 0, 00:07:40.452 "r_mbytes_per_sec": 0, 00:07:40.452 "w_mbytes_per_sec": 0 00:07:40.452 }, 00:07:40.452 "claimed": false, 00:07:40.452 "zoned": false, 00:07:40.452 "supported_io_types": { 00:07:40.452 "read": true, 00:07:40.452 "write": true, 00:07:40.452 "unmap": true, 00:07:40.452 "flush": true, 00:07:40.452 "reset": true, 00:07:40.452 "nvme_admin": false, 00:07:40.452 "nvme_io": false, 00:07:40.452 "nvme_io_md": false, 00:07:40.452 "write_zeroes": true, 00:07:40.452 
"zcopy": true, 00:07:40.452 "get_zone_info": false, 00:07:40.452 "zone_management": false, 00:07:40.452 "zone_append": false, 00:07:40.452 "compare": false, 00:07:40.452 "compare_and_write": false, 00:07:40.452 "abort": true, 00:07:40.452 "seek_hole": false, 00:07:40.452 "seek_data": false, 00:07:40.452 "copy": true, 00:07:40.452 "nvme_iov_md": false 00:07:40.452 }, 00:07:40.452 "memory_domains": [ 00:07:40.452 { 00:07:40.452 "dma_device_id": "system", 00:07:40.452 "dma_device_type": 1 00:07:40.452 }, 00:07:40.452 { 00:07:40.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.452 "dma_device_type": 2 00:07:40.452 } 00:07:40.452 ], 00:07:40.452 "driver_specific": {} 00:07:40.452 } 00:07:40.452 ] 00:07:40.452 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.452 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:40.452 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:40.452 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:40.452 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:40.452 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.452 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.452 [2024-11-02 23:47:34.371641] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:40.452 [2024-11-02 23:47:34.371715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:40.452 [2024-11-02 23:47:34.371768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:40.452 [2024-11-02 23:47:34.374062] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:40.452 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.452 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:40.452 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.452 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.452 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.452 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.452 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:40.452 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.452 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.452 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.452 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.452 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.453 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.453 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.453 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.453 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.453 23:47:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.453 "name": "Existed_Raid", 00:07:40.453 "uuid": "bd38cc83-6979-438c-837a-ec55439ce52d", 00:07:40.453 "strip_size_kb": 64, 00:07:40.453 "state": "configuring", 00:07:40.453 "raid_level": "raid0", 00:07:40.453 "superblock": true, 00:07:40.453 "num_base_bdevs": 3, 00:07:40.453 "num_base_bdevs_discovered": 2, 00:07:40.453 "num_base_bdevs_operational": 3, 00:07:40.453 "base_bdevs_list": [ 00:07:40.453 { 00:07:40.453 "name": "BaseBdev1", 00:07:40.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.453 "is_configured": false, 00:07:40.453 "data_offset": 0, 00:07:40.453 "data_size": 0 00:07:40.453 }, 00:07:40.453 { 00:07:40.453 "name": "BaseBdev2", 00:07:40.453 "uuid": "6c6dbc3b-4841-4d17-80fd-ba3711d41074", 00:07:40.453 "is_configured": true, 00:07:40.453 "data_offset": 2048, 00:07:40.453 "data_size": 63488 00:07:40.453 }, 00:07:40.453 { 00:07:40.453 "name": "BaseBdev3", 00:07:40.453 "uuid": "006515ea-6e87-4375-b566-7b2483d5797c", 00:07:40.453 "is_configured": true, 00:07:40.453 "data_offset": 2048, 00:07:40.453 "data_size": 63488 00:07:40.453 } 00:07:40.453 ] 00:07:40.453 }' 00:07:40.453 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.453 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.715 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:40.715 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.715 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.715 [2024-11-02 23:47:34.782936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:40.715 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.715 23:47:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:40.715 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.715 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.715 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.715 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.715 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:40.715 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.715 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.715 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.715 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.715 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.715 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.715 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.715 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.715 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.974 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.974 "name": "Existed_Raid", 00:07:40.974 "uuid": "bd38cc83-6979-438c-837a-ec55439ce52d", 00:07:40.974 "strip_size_kb": 64, 
00:07:40.974 "state": "configuring", 00:07:40.974 "raid_level": "raid0", 00:07:40.974 "superblock": true, 00:07:40.974 "num_base_bdevs": 3, 00:07:40.974 "num_base_bdevs_discovered": 1, 00:07:40.974 "num_base_bdevs_operational": 3, 00:07:40.974 "base_bdevs_list": [ 00:07:40.974 { 00:07:40.974 "name": "BaseBdev1", 00:07:40.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.974 "is_configured": false, 00:07:40.974 "data_offset": 0, 00:07:40.974 "data_size": 0 00:07:40.974 }, 00:07:40.974 { 00:07:40.974 "name": null, 00:07:40.974 "uuid": "6c6dbc3b-4841-4d17-80fd-ba3711d41074", 00:07:40.974 "is_configured": false, 00:07:40.974 "data_offset": 0, 00:07:40.974 "data_size": 63488 00:07:40.974 }, 00:07:40.974 { 00:07:40.974 "name": "BaseBdev3", 00:07:40.974 "uuid": "006515ea-6e87-4375-b566-7b2483d5797c", 00:07:40.974 "is_configured": true, 00:07:40.974 "data_offset": 2048, 00:07:40.974 "data_size": 63488 00:07:40.974 } 00:07:40.974 ] 00:07:40.974 }' 00:07:40.974 23:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.974 23:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.234 [2024-11-02 23:47:35.239345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.234 BaseBdev1 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.234 
[ 00:07:41.234 { 00:07:41.234 "name": "BaseBdev1", 00:07:41.234 "aliases": [ 00:07:41.234 "fcfc78dd-0732-4bf6-b08d-e80e732afa02" 00:07:41.234 ], 00:07:41.234 "product_name": "Malloc disk", 00:07:41.234 "block_size": 512, 00:07:41.234 "num_blocks": 65536, 00:07:41.234 "uuid": "fcfc78dd-0732-4bf6-b08d-e80e732afa02", 00:07:41.234 "assigned_rate_limits": { 00:07:41.234 "rw_ios_per_sec": 0, 00:07:41.234 "rw_mbytes_per_sec": 0, 00:07:41.234 "r_mbytes_per_sec": 0, 00:07:41.234 "w_mbytes_per_sec": 0 00:07:41.234 }, 00:07:41.234 "claimed": true, 00:07:41.234 "claim_type": "exclusive_write", 00:07:41.234 "zoned": false, 00:07:41.234 "supported_io_types": { 00:07:41.234 "read": true, 00:07:41.234 "write": true, 00:07:41.234 "unmap": true, 00:07:41.234 "flush": true, 00:07:41.234 "reset": true, 00:07:41.234 "nvme_admin": false, 00:07:41.234 "nvme_io": false, 00:07:41.234 "nvme_io_md": false, 00:07:41.234 "write_zeroes": true, 00:07:41.234 "zcopy": true, 00:07:41.234 "get_zone_info": false, 00:07:41.234 "zone_management": false, 00:07:41.234 "zone_append": false, 00:07:41.234 "compare": false, 00:07:41.234 "compare_and_write": false, 00:07:41.234 "abort": true, 00:07:41.234 "seek_hole": false, 00:07:41.234 "seek_data": false, 00:07:41.234 "copy": true, 00:07:41.234 "nvme_iov_md": false 00:07:41.234 }, 00:07:41.234 "memory_domains": [ 00:07:41.234 { 00:07:41.234 "dma_device_id": "system", 00:07:41.234 "dma_device_type": 1 00:07:41.234 }, 00:07:41.234 { 00:07:41.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.234 "dma_device_type": 2 00:07:41.234 } 00:07:41.234 ], 00:07:41.234 "driver_specific": {} 00:07:41.234 } 00:07:41.234 ] 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.234 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.494 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.494 "name": "Existed_Raid", 00:07:41.494 "uuid": "bd38cc83-6979-438c-837a-ec55439ce52d", 00:07:41.494 "strip_size_kb": 64, 00:07:41.494 "state": "configuring", 00:07:41.494 "raid_level": "raid0", 00:07:41.494 "superblock": true, 
00:07:41.494 "num_base_bdevs": 3, 00:07:41.494 "num_base_bdevs_discovered": 2, 00:07:41.494 "num_base_bdevs_operational": 3, 00:07:41.494 "base_bdevs_list": [ 00:07:41.494 { 00:07:41.494 "name": "BaseBdev1", 00:07:41.494 "uuid": "fcfc78dd-0732-4bf6-b08d-e80e732afa02", 00:07:41.494 "is_configured": true, 00:07:41.494 "data_offset": 2048, 00:07:41.494 "data_size": 63488 00:07:41.494 }, 00:07:41.494 { 00:07:41.494 "name": null, 00:07:41.494 "uuid": "6c6dbc3b-4841-4d17-80fd-ba3711d41074", 00:07:41.494 "is_configured": false, 00:07:41.494 "data_offset": 0, 00:07:41.494 "data_size": 63488 00:07:41.494 }, 00:07:41.494 { 00:07:41.494 "name": "BaseBdev3", 00:07:41.494 "uuid": "006515ea-6e87-4375-b566-7b2483d5797c", 00:07:41.494 "is_configured": true, 00:07:41.494 "data_offset": 2048, 00:07:41.494 "data_size": 63488 00:07:41.494 } 00:07:41.494 ] 00:07:41.494 }' 00:07:41.494 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.494 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.754 [2024-11-02 23:47:35.782556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.754 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.755 "name": "Existed_Raid", 00:07:41.755 "uuid": "bd38cc83-6979-438c-837a-ec55439ce52d", 00:07:41.755 "strip_size_kb": 64, 00:07:41.755 "state": "configuring", 00:07:41.755 "raid_level": "raid0", 00:07:41.755 "superblock": true, 00:07:41.755 "num_base_bdevs": 3, 00:07:41.755 "num_base_bdevs_discovered": 1, 00:07:41.755 "num_base_bdevs_operational": 3, 00:07:41.755 "base_bdevs_list": [ 00:07:41.755 { 00:07:41.755 "name": "BaseBdev1", 00:07:41.755 "uuid": "fcfc78dd-0732-4bf6-b08d-e80e732afa02", 00:07:41.755 "is_configured": true, 00:07:41.755 "data_offset": 2048, 00:07:41.755 "data_size": 63488 00:07:41.755 }, 00:07:41.755 { 00:07:41.755 "name": null, 00:07:41.755 "uuid": "6c6dbc3b-4841-4d17-80fd-ba3711d41074", 00:07:41.755 "is_configured": false, 00:07:41.755 "data_offset": 0, 00:07:41.755 "data_size": 63488 00:07:41.755 }, 00:07:41.755 { 00:07:41.755 "name": null, 00:07:41.755 "uuid": "006515ea-6e87-4375-b566-7b2483d5797c", 00:07:41.755 "is_configured": false, 00:07:41.755 "data_offset": 0, 00:07:41.755 "data_size": 63488 00:07:41.755 } 00:07:41.755 ] 00:07:41.755 }' 00:07:41.755 23:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.755 23:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.343 [2024-11-02 23:47:36.282541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.343 "name": "Existed_Raid", 00:07:42.343 "uuid": "bd38cc83-6979-438c-837a-ec55439ce52d", 00:07:42.343 "strip_size_kb": 64, 00:07:42.343 "state": "configuring", 00:07:42.343 "raid_level": "raid0", 00:07:42.343 "superblock": true, 00:07:42.343 "num_base_bdevs": 3, 00:07:42.343 "num_base_bdevs_discovered": 2, 00:07:42.343 "num_base_bdevs_operational": 3, 00:07:42.343 "base_bdevs_list": [ 00:07:42.343 { 00:07:42.343 "name": "BaseBdev1", 00:07:42.343 "uuid": "fcfc78dd-0732-4bf6-b08d-e80e732afa02", 00:07:42.343 "is_configured": true, 00:07:42.343 "data_offset": 2048, 00:07:42.343 "data_size": 63488 00:07:42.343 }, 00:07:42.343 { 00:07:42.343 "name": null, 00:07:42.343 "uuid": "6c6dbc3b-4841-4d17-80fd-ba3711d41074", 00:07:42.343 "is_configured": false, 00:07:42.343 "data_offset": 0, 00:07:42.343 "data_size": 63488 00:07:42.343 }, 00:07:42.343 { 00:07:42.343 "name": "BaseBdev3", 00:07:42.343 "uuid": "006515ea-6e87-4375-b566-7b2483d5797c", 00:07:42.343 "is_configured": true, 00:07:42.343 "data_offset": 2048, 00:07:42.343 "data_size": 63488 00:07:42.343 } 00:07:42.343 ] 00:07:42.343 }' 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.343 23:47:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:42.613 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.613 23:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.613 23:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.613 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.873 [2024-11-02 23:47:36.754123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.873 "name": "Existed_Raid", 00:07:42.873 "uuid": "bd38cc83-6979-438c-837a-ec55439ce52d", 00:07:42.873 "strip_size_kb": 64, 00:07:42.873 "state": "configuring", 00:07:42.873 "raid_level": "raid0", 00:07:42.873 "superblock": true, 00:07:42.873 "num_base_bdevs": 3, 00:07:42.873 "num_base_bdevs_discovered": 1, 00:07:42.873 "num_base_bdevs_operational": 3, 00:07:42.873 "base_bdevs_list": [ 00:07:42.873 { 00:07:42.873 "name": null, 00:07:42.873 "uuid": "fcfc78dd-0732-4bf6-b08d-e80e732afa02", 00:07:42.873 "is_configured": false, 00:07:42.873 "data_offset": 0, 00:07:42.873 "data_size": 63488 00:07:42.873 }, 00:07:42.873 { 00:07:42.873 "name": null, 00:07:42.873 "uuid": "6c6dbc3b-4841-4d17-80fd-ba3711d41074", 00:07:42.873 "is_configured": false, 00:07:42.873 "data_offset": 0, 00:07:42.873 
"data_size": 63488 00:07:42.873 }, 00:07:42.873 { 00:07:42.873 "name": "BaseBdev3", 00:07:42.873 "uuid": "006515ea-6e87-4375-b566-7b2483d5797c", 00:07:42.873 "is_configured": true, 00:07:42.873 "data_offset": 2048, 00:07:42.873 "data_size": 63488 00:07:42.873 } 00:07:42.873 ] 00:07:42.873 }' 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.873 23:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.132 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:43.132 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.132 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.132 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.132 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.132 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:43.132 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:43.132 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.132 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.132 [2024-11-02 23:47:37.221513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:43.391 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.391 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:43.391 23:47:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.391 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.392 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.392 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.392 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:43.392 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.392 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.392 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.392 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.392 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.392 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.392 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.392 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.392 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.392 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.392 "name": "Existed_Raid", 00:07:43.392 "uuid": "bd38cc83-6979-438c-837a-ec55439ce52d", 00:07:43.392 "strip_size_kb": 64, 00:07:43.392 "state": "configuring", 00:07:43.392 "raid_level": "raid0", 00:07:43.392 "superblock": true, 00:07:43.392 "num_base_bdevs": 3, 00:07:43.392 
"num_base_bdevs_discovered": 2, 00:07:43.392 "num_base_bdevs_operational": 3, 00:07:43.392 "base_bdevs_list": [ 00:07:43.392 { 00:07:43.392 "name": null, 00:07:43.392 "uuid": "fcfc78dd-0732-4bf6-b08d-e80e732afa02", 00:07:43.392 "is_configured": false, 00:07:43.392 "data_offset": 0, 00:07:43.392 "data_size": 63488 00:07:43.392 }, 00:07:43.392 { 00:07:43.392 "name": "BaseBdev2", 00:07:43.392 "uuid": "6c6dbc3b-4841-4d17-80fd-ba3711d41074", 00:07:43.392 "is_configured": true, 00:07:43.392 "data_offset": 2048, 00:07:43.392 "data_size": 63488 00:07:43.392 }, 00:07:43.392 { 00:07:43.392 "name": "BaseBdev3", 00:07:43.392 "uuid": "006515ea-6e87-4375-b566-7b2483d5797c", 00:07:43.392 "is_configured": true, 00:07:43.392 "data_offset": 2048, 00:07:43.392 "data_size": 63488 00:07:43.392 } 00:07:43.392 ] 00:07:43.392 }' 00:07:43.392 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.392 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.651 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.651 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:43.651 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.651 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.651 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.651 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:43.651 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:43.651 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.651 23:47:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.651 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.651 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.651 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fcfc78dd-0732-4bf6-b08d-e80e732afa02 00:07:43.651 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.651 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.910 [2024-11-02 23:47:37.753576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:43.910 [2024-11-02 23:47:37.753825] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:43.910 [2024-11-02 23:47:37.753848] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:43.910 [2024-11-02 23:47:37.754138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:07:43.910 [2024-11-02 23:47:37.754293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:43.910 [2024-11-02 23:47:37.754313] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:07:43.910 NewBaseBdev 00:07:43.910 [2024-11-02 23:47:37.754485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.910 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.910 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:43.910 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:07:43.910 
23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:43.910 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:43.910 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:43.910 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:43.910 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:43.910 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.910 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.910 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.910 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:43.910 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.910 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.910 [ 00:07:43.910 { 00:07:43.910 "name": "NewBaseBdev", 00:07:43.910 "aliases": [ 00:07:43.910 "fcfc78dd-0732-4bf6-b08d-e80e732afa02" 00:07:43.910 ], 00:07:43.910 "product_name": "Malloc disk", 00:07:43.910 "block_size": 512, 00:07:43.910 "num_blocks": 65536, 00:07:43.911 "uuid": "fcfc78dd-0732-4bf6-b08d-e80e732afa02", 00:07:43.911 "assigned_rate_limits": { 00:07:43.911 "rw_ios_per_sec": 0, 00:07:43.911 "rw_mbytes_per_sec": 0, 00:07:43.911 "r_mbytes_per_sec": 0, 00:07:43.911 "w_mbytes_per_sec": 0 00:07:43.911 }, 00:07:43.911 "claimed": true, 00:07:43.911 "claim_type": "exclusive_write", 00:07:43.911 "zoned": false, 00:07:43.911 "supported_io_types": { 00:07:43.911 "read": true, 00:07:43.911 "write": true, 00:07:43.911 
"unmap": true, 00:07:43.911 "flush": true, 00:07:43.911 "reset": true, 00:07:43.911 "nvme_admin": false, 00:07:43.911 "nvme_io": false, 00:07:43.911 "nvme_io_md": false, 00:07:43.911 "write_zeroes": true, 00:07:43.911 "zcopy": true, 00:07:43.911 "get_zone_info": false, 00:07:43.911 "zone_management": false, 00:07:43.911 "zone_append": false, 00:07:43.911 "compare": false, 00:07:43.911 "compare_and_write": false, 00:07:43.911 "abort": true, 00:07:43.911 "seek_hole": false, 00:07:43.911 "seek_data": false, 00:07:43.911 "copy": true, 00:07:43.911 "nvme_iov_md": false 00:07:43.911 }, 00:07:43.911 "memory_domains": [ 00:07:43.911 { 00:07:43.911 "dma_device_id": "system", 00:07:43.911 "dma_device_type": 1 00:07:43.911 }, 00:07:43.911 { 00:07:43.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.911 "dma_device_type": 2 00:07:43.911 } 00:07:43.911 ], 00:07:43.911 "driver_specific": {} 00:07:43.911 } 00:07:43.911 ] 00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.911 "name": "Existed_Raid", 00:07:43.911 "uuid": "bd38cc83-6979-438c-837a-ec55439ce52d", 00:07:43.911 "strip_size_kb": 64, 00:07:43.911 "state": "online", 00:07:43.911 "raid_level": "raid0", 00:07:43.911 "superblock": true, 00:07:43.911 "num_base_bdevs": 3, 00:07:43.911 "num_base_bdevs_discovered": 3, 00:07:43.911 "num_base_bdevs_operational": 3, 00:07:43.911 "base_bdevs_list": [ 00:07:43.911 { 00:07:43.911 "name": "NewBaseBdev", 00:07:43.911 "uuid": "fcfc78dd-0732-4bf6-b08d-e80e732afa02", 00:07:43.911 "is_configured": true, 00:07:43.911 "data_offset": 2048, 00:07:43.911 "data_size": 63488 00:07:43.911 }, 00:07:43.911 { 00:07:43.911 "name": "BaseBdev2", 00:07:43.911 "uuid": "6c6dbc3b-4841-4d17-80fd-ba3711d41074", 00:07:43.911 "is_configured": true, 00:07:43.911 "data_offset": 2048, 00:07:43.911 "data_size": 63488 00:07:43.911 }, 00:07:43.911 { 00:07:43.911 "name": "BaseBdev3", 00:07:43.911 "uuid": "006515ea-6e87-4375-b566-7b2483d5797c", 00:07:43.911 
"is_configured": true, 00:07:43.911 "data_offset": 2048, 00:07:43.911 "data_size": 63488 00:07:43.911 } 00:07:43.911 ] 00:07:43.911 }' 00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.911 23:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.170 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:44.170 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:44.170 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:44.170 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:44.170 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:44.170 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:44.170 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:44.170 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.170 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.170 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:44.170 [2024-11-02 23:47:38.245159] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.170 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:44.429 "name": "Existed_Raid", 00:07:44.429 "aliases": [ 00:07:44.429 "bd38cc83-6979-438c-837a-ec55439ce52d" 00:07:44.429 ], 00:07:44.429 "product_name": "Raid 
Volume", 00:07:44.429 "block_size": 512, 00:07:44.429 "num_blocks": 190464, 00:07:44.429 "uuid": "bd38cc83-6979-438c-837a-ec55439ce52d", 00:07:44.429 "assigned_rate_limits": { 00:07:44.429 "rw_ios_per_sec": 0, 00:07:44.429 "rw_mbytes_per_sec": 0, 00:07:44.429 "r_mbytes_per_sec": 0, 00:07:44.429 "w_mbytes_per_sec": 0 00:07:44.429 }, 00:07:44.429 "claimed": false, 00:07:44.429 "zoned": false, 00:07:44.429 "supported_io_types": { 00:07:44.429 "read": true, 00:07:44.429 "write": true, 00:07:44.429 "unmap": true, 00:07:44.429 "flush": true, 00:07:44.429 "reset": true, 00:07:44.429 "nvme_admin": false, 00:07:44.429 "nvme_io": false, 00:07:44.429 "nvme_io_md": false, 00:07:44.429 "write_zeroes": true, 00:07:44.429 "zcopy": false, 00:07:44.429 "get_zone_info": false, 00:07:44.429 "zone_management": false, 00:07:44.429 "zone_append": false, 00:07:44.429 "compare": false, 00:07:44.429 "compare_and_write": false, 00:07:44.429 "abort": false, 00:07:44.429 "seek_hole": false, 00:07:44.429 "seek_data": false, 00:07:44.429 "copy": false, 00:07:44.429 "nvme_iov_md": false 00:07:44.429 }, 00:07:44.429 "memory_domains": [ 00:07:44.429 { 00:07:44.429 "dma_device_id": "system", 00:07:44.429 "dma_device_type": 1 00:07:44.429 }, 00:07:44.429 { 00:07:44.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.429 "dma_device_type": 2 00:07:44.429 }, 00:07:44.429 { 00:07:44.429 "dma_device_id": "system", 00:07:44.429 "dma_device_type": 1 00:07:44.429 }, 00:07:44.429 { 00:07:44.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.429 "dma_device_type": 2 00:07:44.429 }, 00:07:44.429 { 00:07:44.429 "dma_device_id": "system", 00:07:44.429 "dma_device_type": 1 00:07:44.429 }, 00:07:44.429 { 00:07:44.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.429 "dma_device_type": 2 00:07:44.429 } 00:07:44.429 ], 00:07:44.429 "driver_specific": { 00:07:44.429 "raid": { 00:07:44.429 "uuid": "bd38cc83-6979-438c-837a-ec55439ce52d", 00:07:44.429 "strip_size_kb": 64, 00:07:44.429 "state": "online", 
00:07:44.429 "raid_level": "raid0", 00:07:44.429 "superblock": true, 00:07:44.429 "num_base_bdevs": 3, 00:07:44.429 "num_base_bdevs_discovered": 3, 00:07:44.429 "num_base_bdevs_operational": 3, 00:07:44.429 "base_bdevs_list": [ 00:07:44.429 { 00:07:44.429 "name": "NewBaseBdev", 00:07:44.429 "uuid": "fcfc78dd-0732-4bf6-b08d-e80e732afa02", 00:07:44.429 "is_configured": true, 00:07:44.429 "data_offset": 2048, 00:07:44.429 "data_size": 63488 00:07:44.429 }, 00:07:44.429 { 00:07:44.429 "name": "BaseBdev2", 00:07:44.429 "uuid": "6c6dbc3b-4841-4d17-80fd-ba3711d41074", 00:07:44.429 "is_configured": true, 00:07:44.429 "data_offset": 2048, 00:07:44.429 "data_size": 63488 00:07:44.429 }, 00:07:44.429 { 00:07:44.429 "name": "BaseBdev3", 00:07:44.429 "uuid": "006515ea-6e87-4375-b566-7b2483d5797c", 00:07:44.429 "is_configured": true, 00:07:44.429 "data_offset": 2048, 00:07:44.429 "data_size": 63488 00:07:44.429 } 00:07:44.429 ] 00:07:44.429 } 00:07:44.429 } 00:07:44.429 }' 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:44.429 BaseBdev2 00:07:44.429 BaseBdev3' 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:44.429 23:47:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.429 [2024-11-02 23:47:38.476435] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:44.429 [2024-11-02 23:47:38.476495] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.429 [2024-11-02 23:47:38.476620] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.429 [2024-11-02 23:47:38.476690] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:44.429 [2024-11-02 23:47:38.476706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75446 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 75446 ']' 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 
75446 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75446 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:44.429 killing process with pid 75446 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75446' 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 75446 00:07:44.429 [2024-11-02 23:47:38.518684] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:44.429 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 75446 00:07:44.688 [2024-11-02 23:47:38.579480] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:44.948 23:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:44.948 00:07:44.948 real 0m9.000s 00:07:44.948 user 0m15.078s 00:07:44.948 sys 0m1.935s 00:07:44.948 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:44.948 23:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.948 ************************************ 00:07:44.948 END TEST raid_state_function_test_sb 00:07:44.948 ************************************ 00:07:44.948 23:47:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:07:44.948 23:47:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:44.948 
23:47:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:44.948 23:47:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:44.948 ************************************ 00:07:44.948 START TEST raid_superblock_test 00:07:44.948 ************************************ 00:07:44.948 23:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3 00:07:44.948 23:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:44.948 23:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:07:44.948 23:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:44.948 23:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:44.948 23:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:44.948 23:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:44.948 23:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:44.948 23:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:44.948 23:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:44.948 23:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:44.948 23:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:44.948 23:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:44.948 23:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:44.949 23:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:44.949 23:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:07:44.949 23:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:44.949 23:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76055 00:07:44.949 23:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:44.949 23:47:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76055 00:07:44.949 23:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 76055 ']' 00:07:44.949 23:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.949 23:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:44.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.949 23:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.949 23:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:44.949 23:47:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.208 [2024-11-02 23:47:39.078538] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:07:45.208 [2024-11-02 23:47:39.078691] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76055 ] 00:07:45.208 [2024-11-02 23:47:39.216785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.208 [2024-11-02 23:47:39.264155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.467 [2024-11-02 23:47:39.342389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.467 [2024-11-02 23:47:39.342439] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.050 23:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:46.050 23:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:07:46.050 23:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:46.050 23:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:46.050 23:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:46.050 23:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:46.050 23:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:46.050 23:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:46.050 23:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:46.050 23:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:46.050 23:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:46.050 
23:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.050 23:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.050 malloc1 00:07:46.050 23:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.050 23:47:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:46.050 23:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.050 23:47:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.050 [2024-11-02 23:47:39.999261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:46.050 [2024-11-02 23:47:39.999330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.050 [2024-11-02 23:47:39.999352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:46.050 [2024-11-02 23:47:39.999365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.050 [2024-11-02 23:47:40.001511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.050 [2024-11-02 23:47:40.001555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:46.050 pt1 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.050 malloc2 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.050 [2024-11-02 23:47:40.028114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:46.050 [2024-11-02 23:47:40.028163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.050 [2024-11-02 23:47:40.028178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:46.050 [2024-11-02 23:47:40.028189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.050 [2024-11-02 23:47:40.030262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.050 [2024-11-02 23:47:40.030301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:46.050 
pt2 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:07:46.050 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.051 malloc3 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.051 [2024-11-02 23:47:40.056917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:46.051 [2024-11-02 23:47:40.056972] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.051 [2024-11-02 23:47:40.056990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:46.051 [2024-11-02 23:47:40.057000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.051 [2024-11-02 23:47:40.059298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.051 [2024-11-02 23:47:40.059336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:46.051 pt3 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.051 [2024-11-02 23:47:40.068962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:46.051 [2024-11-02 23:47:40.070859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:46.051 [2024-11-02 23:47:40.070921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:46.051 [2024-11-02 23:47:40.071067] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:46.051 [2024-11-02 23:47:40.071085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:46.051 [2024-11-02 23:47:40.071359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 
00:07:46.051 [2024-11-02 23:47:40.071507] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:46.051 [2024-11-02 23:47:40.071533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:46.051 [2024-11-02 23:47:40.071652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.051 23:47:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.051 "name": "raid_bdev1", 00:07:46.051 "uuid": "bb1aba98-659b-4024-87f8-8bf4deb4d495", 00:07:46.051 "strip_size_kb": 64, 00:07:46.051 "state": "online", 00:07:46.051 "raid_level": "raid0", 00:07:46.051 "superblock": true, 00:07:46.051 "num_base_bdevs": 3, 00:07:46.051 "num_base_bdevs_discovered": 3, 00:07:46.051 "num_base_bdevs_operational": 3, 00:07:46.051 "base_bdevs_list": [ 00:07:46.051 { 00:07:46.051 "name": "pt1", 00:07:46.051 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.051 "is_configured": true, 00:07:46.051 "data_offset": 2048, 00:07:46.051 "data_size": 63488 00:07:46.051 }, 00:07:46.051 { 00:07:46.051 "name": "pt2", 00:07:46.051 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.051 "is_configured": true, 00:07:46.051 "data_offset": 2048, 00:07:46.051 "data_size": 63488 00:07:46.051 }, 00:07:46.051 { 00:07:46.051 "name": "pt3", 00:07:46.051 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:46.051 "is_configured": true, 00:07:46.051 "data_offset": 2048, 00:07:46.051 "data_size": 63488 00:07:46.051 } 00:07:46.051 ] 00:07:46.051 }' 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.051 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.620 [2024-11-02 23:47:40.520490] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:46.620 "name": "raid_bdev1", 00:07:46.620 "aliases": [ 00:07:46.620 "bb1aba98-659b-4024-87f8-8bf4deb4d495" 00:07:46.620 ], 00:07:46.620 "product_name": "Raid Volume", 00:07:46.620 "block_size": 512, 00:07:46.620 "num_blocks": 190464, 00:07:46.620 "uuid": "bb1aba98-659b-4024-87f8-8bf4deb4d495", 00:07:46.620 "assigned_rate_limits": { 00:07:46.620 "rw_ios_per_sec": 0, 00:07:46.620 "rw_mbytes_per_sec": 0, 00:07:46.620 "r_mbytes_per_sec": 0, 00:07:46.620 "w_mbytes_per_sec": 0 00:07:46.620 }, 00:07:46.620 "claimed": false, 00:07:46.620 "zoned": false, 00:07:46.620 "supported_io_types": { 00:07:46.620 "read": true, 00:07:46.620 "write": true, 00:07:46.620 "unmap": true, 00:07:46.620 "flush": true, 00:07:46.620 "reset": true, 00:07:46.620 "nvme_admin": false, 00:07:46.620 "nvme_io": false, 00:07:46.620 "nvme_io_md": false, 00:07:46.620 "write_zeroes": true, 00:07:46.620 "zcopy": false, 00:07:46.620 "get_zone_info": false, 00:07:46.620 "zone_management": false, 00:07:46.620 "zone_append": false, 00:07:46.620 "compare": 
false, 00:07:46.620 "compare_and_write": false, 00:07:46.620 "abort": false, 00:07:46.620 "seek_hole": false, 00:07:46.620 "seek_data": false, 00:07:46.620 "copy": false, 00:07:46.620 "nvme_iov_md": false 00:07:46.620 }, 00:07:46.620 "memory_domains": [ 00:07:46.620 { 00:07:46.620 "dma_device_id": "system", 00:07:46.620 "dma_device_type": 1 00:07:46.620 }, 00:07:46.620 { 00:07:46.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.620 "dma_device_type": 2 00:07:46.620 }, 00:07:46.620 { 00:07:46.620 "dma_device_id": "system", 00:07:46.620 "dma_device_type": 1 00:07:46.620 }, 00:07:46.620 { 00:07:46.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.620 "dma_device_type": 2 00:07:46.620 }, 00:07:46.620 { 00:07:46.620 "dma_device_id": "system", 00:07:46.620 "dma_device_type": 1 00:07:46.620 }, 00:07:46.620 { 00:07:46.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.620 "dma_device_type": 2 00:07:46.620 } 00:07:46.620 ], 00:07:46.620 "driver_specific": { 00:07:46.620 "raid": { 00:07:46.620 "uuid": "bb1aba98-659b-4024-87f8-8bf4deb4d495", 00:07:46.620 "strip_size_kb": 64, 00:07:46.620 "state": "online", 00:07:46.620 "raid_level": "raid0", 00:07:46.620 "superblock": true, 00:07:46.620 "num_base_bdevs": 3, 00:07:46.620 "num_base_bdevs_discovered": 3, 00:07:46.620 "num_base_bdevs_operational": 3, 00:07:46.620 "base_bdevs_list": [ 00:07:46.620 { 00:07:46.620 "name": "pt1", 00:07:46.620 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.620 "is_configured": true, 00:07:46.620 "data_offset": 2048, 00:07:46.620 "data_size": 63488 00:07:46.620 }, 00:07:46.620 { 00:07:46.620 "name": "pt2", 00:07:46.620 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.620 "is_configured": true, 00:07:46.620 "data_offset": 2048, 00:07:46.620 "data_size": 63488 00:07:46.620 }, 00:07:46.620 { 00:07:46.620 "name": "pt3", 00:07:46.620 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:46.620 "is_configured": true, 00:07:46.620 "data_offset": 2048, 00:07:46.620 "data_size": 
63488 00:07:46.620 } 00:07:46.620 ] 00:07:46.620 } 00:07:46.620 } 00:07:46.620 }' 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:46.620 pt2 00:07:46.620 pt3' 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.620 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.884 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.884 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.884 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.884 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 [2024-11-02 23:47:40.795965] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bb1aba98-659b-4024-87f8-8bf4deb4d495 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bb1aba98-659b-4024-87f8-8bf4deb4d495 ']' 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 [2024-11-02 23:47:40.839628] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:46.885 [2024-11-02 23:47:40.839672] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:46.885 [2024-11-02 23:47:40.839754] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.885 [2024-11-02 23:47:40.839820] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.885 [2024-11-02 23:47:40.839834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 [2024-11-02 23:47:40.975433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:46.885 [2024-11-02 23:47:40.977343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:46.885 [2024-11-02 23:47:40.977395] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:07:46.885 [2024-11-02 23:47:40.977445] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:47.163 [2024-11-02 23:47:40.977493] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:47.163 [2024-11-02 23:47:40.977527] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:07:47.163 [2024-11-02 23:47:40.977540] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:47.163 [2024-11-02 23:47:40.977551] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:47.163 request: 00:07:47.163 { 00:07:47.163 "name": "raid_bdev1", 00:07:47.163 "raid_level": "raid0", 00:07:47.163 "base_bdevs": [ 00:07:47.163 "malloc1", 00:07:47.163 "malloc2", 00:07:47.163 "malloc3" 00:07:47.163 ], 00:07:47.163 "strip_size_kb": 64, 00:07:47.163 "superblock": false, 00:07:47.163 "method": "bdev_raid_create", 00:07:47.163 "req_id": 1 00:07:47.163 } 00:07:47.163 Got JSON-RPC error response 00:07:47.163 response: 00:07:47.163 { 00:07:47.163 "code": -17, 00:07:47.163 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:47.163 } 00:07:47.163 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:47.163 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:47.163 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:47.163 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:47.163 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:47.163 23:47:40 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.163 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.163 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.163 23:47:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:47.163 23:47:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.163 [2024-11-02 23:47:41.043272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:47.163 [2024-11-02 23:47:41.043321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.163 [2024-11-02 23:47:41.043336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:47.163 [2024-11-02 23:47:41.043346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.163 [2024-11-02 23:47:41.045449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.163 [2024-11-02 23:47:41.045486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:47.163 [2024-11-02 23:47:41.045554] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:47.163 [2024-11-02 23:47:41.045603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:07:47.163 pt1 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.163 "name": "raid_bdev1", 00:07:47.163 "uuid": "bb1aba98-659b-4024-87f8-8bf4deb4d495", 00:07:47.163 
"strip_size_kb": 64, 00:07:47.163 "state": "configuring", 00:07:47.163 "raid_level": "raid0", 00:07:47.163 "superblock": true, 00:07:47.163 "num_base_bdevs": 3, 00:07:47.163 "num_base_bdevs_discovered": 1, 00:07:47.163 "num_base_bdevs_operational": 3, 00:07:47.163 "base_bdevs_list": [ 00:07:47.163 { 00:07:47.163 "name": "pt1", 00:07:47.163 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.163 "is_configured": true, 00:07:47.163 "data_offset": 2048, 00:07:47.163 "data_size": 63488 00:07:47.163 }, 00:07:47.163 { 00:07:47.163 "name": null, 00:07:47.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.163 "is_configured": false, 00:07:47.163 "data_offset": 2048, 00:07:47.163 "data_size": 63488 00:07:47.163 }, 00:07:47.163 { 00:07:47.163 "name": null, 00:07:47.163 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:47.163 "is_configured": false, 00:07:47.163 "data_offset": 2048, 00:07:47.163 "data_size": 63488 00:07:47.163 } 00:07:47.163 ] 00:07:47.163 }' 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.163 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.463 [2024-11-02 23:47:41.490589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:47.463 [2024-11-02 23:47:41.490714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.463 [2024-11-02 23:47:41.490780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:07:47.463 [2024-11-02 23:47:41.490849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.463 [2024-11-02 23:47:41.491276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.463 [2024-11-02 23:47:41.491338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:47.463 [2024-11-02 23:47:41.491447] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:47.463 [2024-11-02 23:47:41.491515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:47.463 pt2 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.463 [2024-11-02 23:47:41.502565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:47.463 23:47:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.463 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.463 "name": "raid_bdev1", 00:07:47.463 "uuid": "bb1aba98-659b-4024-87f8-8bf4deb4d495", 00:07:47.463 "strip_size_kb": 64, 00:07:47.463 "state": "configuring", 00:07:47.463 "raid_level": "raid0", 00:07:47.464 "superblock": true, 00:07:47.464 "num_base_bdevs": 3, 00:07:47.464 "num_base_bdevs_discovered": 1, 00:07:47.464 "num_base_bdevs_operational": 3, 00:07:47.464 "base_bdevs_list": [ 00:07:47.464 { 00:07:47.464 "name": "pt1", 00:07:47.464 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.464 "is_configured": true, 00:07:47.464 "data_offset": 2048, 00:07:47.464 "data_size": 63488 00:07:47.464 }, 00:07:47.464 { 00:07:47.464 "name": null, 00:07:47.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.464 "is_configured": false, 00:07:47.464 "data_offset": 0, 00:07:47.464 "data_size": 63488 00:07:47.464 }, 00:07:47.464 { 00:07:47.464 "name": null, 00:07:47.464 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:47.464 
"is_configured": false, 00:07:47.464 "data_offset": 2048, 00:07:47.464 "data_size": 63488 00:07:47.464 } 00:07:47.464 ] 00:07:47.464 }' 00:07:47.464 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.464 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.033 [2024-11-02 23:47:41.965867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:48.033 [2024-11-02 23:47:41.965970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.033 [2024-11-02 23:47:41.966009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:48.033 [2024-11-02 23:47:41.966057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.033 [2024-11-02 23:47:41.966533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.033 [2024-11-02 23:47:41.966595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:48.033 [2024-11-02 23:47:41.966704] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:48.033 [2024-11-02 23:47:41.966768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:48.033 pt2 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.033 [2024-11-02 23:47:41.977838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:48.033 [2024-11-02 23:47:41.977914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.033 [2024-11-02 23:47:41.977946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:48.033 [2024-11-02 23:47:41.977971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.033 [2024-11-02 23:47:41.978312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.033 [2024-11-02 23:47:41.978373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:48.033 [2024-11-02 23:47:41.978469] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:07:48.033 [2024-11-02 23:47:41.978517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:48.033 [2024-11-02 23:47:41.978634] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:48.033 [2024-11-02 23:47:41.978672] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:48.033 [2024-11-02 23:47:41.978933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:48.033 [2024-11-02 23:47:41.979078] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:48.033 [2024-11-02 23:47:41.979120] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:48.033 [2024-11-02 23:47:41.979255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.033 pt3 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.033 23:47:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.033 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.034 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.034 "name": "raid_bdev1", 00:07:48.034 "uuid": "bb1aba98-659b-4024-87f8-8bf4deb4d495", 00:07:48.034 "strip_size_kb": 64, 00:07:48.034 "state": "online", 00:07:48.034 "raid_level": "raid0", 00:07:48.034 "superblock": true, 00:07:48.034 "num_base_bdevs": 3, 00:07:48.034 "num_base_bdevs_discovered": 3, 00:07:48.034 "num_base_bdevs_operational": 3, 00:07:48.034 "base_bdevs_list": [ 00:07:48.034 { 00:07:48.034 "name": "pt1", 00:07:48.034 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:48.034 "is_configured": true, 00:07:48.034 "data_offset": 2048, 00:07:48.034 "data_size": 63488 00:07:48.034 }, 00:07:48.034 { 00:07:48.034 "name": "pt2", 00:07:48.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:48.034 "is_configured": true, 00:07:48.034 "data_offset": 2048, 00:07:48.034 "data_size": 63488 00:07:48.034 }, 00:07:48.034 { 00:07:48.034 "name": "pt3", 00:07:48.034 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:48.034 "is_configured": true, 00:07:48.034 "data_offset": 2048, 00:07:48.034 "data_size": 63488 00:07:48.034 } 00:07:48.034 ] 00:07:48.034 }' 00:07:48.034 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.034 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.609 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:48.609 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:48.609 23:47:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:48.610 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:48.610 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:48.610 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:48.610 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:48.610 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.610 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:48.610 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.610 [2024-11-02 23:47:42.421377] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.610 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.610 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:48.610 "name": "raid_bdev1", 00:07:48.610 "aliases": [ 00:07:48.610 "bb1aba98-659b-4024-87f8-8bf4deb4d495" 00:07:48.610 ], 00:07:48.610 "product_name": "Raid Volume", 00:07:48.610 "block_size": 512, 00:07:48.610 "num_blocks": 190464, 00:07:48.610 "uuid": "bb1aba98-659b-4024-87f8-8bf4deb4d495", 00:07:48.610 "assigned_rate_limits": { 00:07:48.610 "rw_ios_per_sec": 0, 00:07:48.610 "rw_mbytes_per_sec": 0, 00:07:48.610 "r_mbytes_per_sec": 0, 00:07:48.610 "w_mbytes_per_sec": 0 00:07:48.610 }, 00:07:48.610 "claimed": false, 00:07:48.610 "zoned": false, 00:07:48.610 "supported_io_types": { 00:07:48.610 "read": true, 00:07:48.610 "write": true, 00:07:48.610 "unmap": true, 00:07:48.610 "flush": true, 00:07:48.610 "reset": true, 00:07:48.610 "nvme_admin": false, 00:07:48.610 "nvme_io": false, 00:07:48.610 "nvme_io_md": false, 00:07:48.610 
"write_zeroes": true, 00:07:48.610 "zcopy": false, 00:07:48.610 "get_zone_info": false, 00:07:48.610 "zone_management": false, 00:07:48.610 "zone_append": false, 00:07:48.610 "compare": false, 00:07:48.610 "compare_and_write": false, 00:07:48.610 "abort": false, 00:07:48.610 "seek_hole": false, 00:07:48.610 "seek_data": false, 00:07:48.610 "copy": false, 00:07:48.610 "nvme_iov_md": false 00:07:48.610 }, 00:07:48.610 "memory_domains": [ 00:07:48.610 { 00:07:48.610 "dma_device_id": "system", 00:07:48.610 "dma_device_type": 1 00:07:48.610 }, 00:07:48.610 { 00:07:48.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.610 "dma_device_type": 2 00:07:48.610 }, 00:07:48.610 { 00:07:48.610 "dma_device_id": "system", 00:07:48.611 "dma_device_type": 1 00:07:48.611 }, 00:07:48.611 { 00:07:48.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.611 "dma_device_type": 2 00:07:48.611 }, 00:07:48.611 { 00:07:48.611 "dma_device_id": "system", 00:07:48.611 "dma_device_type": 1 00:07:48.611 }, 00:07:48.611 { 00:07:48.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.611 "dma_device_type": 2 00:07:48.611 } 00:07:48.611 ], 00:07:48.611 "driver_specific": { 00:07:48.611 "raid": { 00:07:48.611 "uuid": "bb1aba98-659b-4024-87f8-8bf4deb4d495", 00:07:48.611 "strip_size_kb": 64, 00:07:48.611 "state": "online", 00:07:48.611 "raid_level": "raid0", 00:07:48.611 "superblock": true, 00:07:48.611 "num_base_bdevs": 3, 00:07:48.611 "num_base_bdevs_discovered": 3, 00:07:48.611 "num_base_bdevs_operational": 3, 00:07:48.611 "base_bdevs_list": [ 00:07:48.611 { 00:07:48.611 "name": "pt1", 00:07:48.611 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:48.611 "is_configured": true, 00:07:48.611 "data_offset": 2048, 00:07:48.611 "data_size": 63488 00:07:48.611 }, 00:07:48.611 { 00:07:48.611 "name": "pt2", 00:07:48.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:48.611 "is_configured": true, 00:07:48.611 "data_offset": 2048, 00:07:48.611 "data_size": 63488 00:07:48.611 }, 00:07:48.611 
{ 00:07:48.611 "name": "pt3", 00:07:48.611 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:48.611 "is_configured": true, 00:07:48.611 "data_offset": 2048, 00:07:48.611 "data_size": 63488 00:07:48.611 } 00:07:48.611 ] 00:07:48.611 } 00:07:48.611 } 00:07:48.611 }' 00:07:48.611 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:48.611 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:48.611 pt2 00:07:48.611 pt3' 00:07:48.611 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.611 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:48.611 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.611 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.612 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.613 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:48.613 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:48.613 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.613 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.613 [2024-11-02 
23:47:42.672892] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.613 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.875 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bb1aba98-659b-4024-87f8-8bf4deb4d495 '!=' bb1aba98-659b-4024-87f8-8bf4deb4d495 ']' 00:07:48.875 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:48.875 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:48.875 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:48.875 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76055 00:07:48.875 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 76055 ']' 00:07:48.875 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 76055 00:07:48.875 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:07:48.875 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:48.876 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76055 00:07:48.876 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:48.876 killing process with pid 76055 00:07:48.876 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:48.876 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76055' 00:07:48.876 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 76055 00:07:48.876 [2024-11-02 23:47:42.743971] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.876 [2024-11-02 23:47:42.744067] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.876 [2024-11-02 23:47:42.744130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.876 [2024-11-02 23:47:42.744141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:48.876 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 76055 00:07:48.876 [2024-11-02 23:47:42.777816] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.136 ************************************ 00:07:49.136 END TEST raid_superblock_test 00:07:49.136 ************************************ 00:07:49.136 23:47:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:49.136 00:07:49.136 real 0m4.018s 00:07:49.136 user 0m6.303s 00:07:49.136 sys 0m0.962s 00:07:49.136 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:49.136 23:47:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.136 23:47:43 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:07:49.136 23:47:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:49.136 23:47:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:49.136 23:47:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.136 ************************************ 00:07:49.136 START TEST raid_read_error_test 00:07:49.136 ************************************ 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:07:49.136 23:47:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pKtnJ2V1Ij 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76298 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76298 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 76298 ']' 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:49.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:49.136 23:47:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.136 [2024-11-02 23:47:43.180915] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:07:49.136 [2024-11-02 23:47:43.181038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76298 ] 00:07:49.400 [2024-11-02 23:47:43.336384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.401 [2024-11-02 23:47:43.362550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.401 [2024-11-02 23:47:43.405062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.401 [2024-11-02 23:47:43.405096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.971 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:49.971 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:07:49.971 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:49.971 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:49.971 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.971 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.231 BaseBdev1_malloc 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.231 true 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.231 [2024-11-02 23:47:44.091354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:50.231 [2024-11-02 23:47:44.091407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.231 [2024-11-02 23:47:44.091427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:50.231 [2024-11-02 23:47:44.091436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.231 [2024-11-02 23:47:44.093614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.231 [2024-11-02 23:47:44.093654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:50.231 BaseBdev1 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.231 BaseBdev2_malloc 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.231 true 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.231 [2024-11-02 23:47:44.135772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:50.231 [2024-11-02 23:47:44.135823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.231 [2024-11-02 23:47:44.135840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:50.231 [2024-11-02 23:47:44.135856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.231 [2024-11-02 23:47:44.137873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.231 [2024-11-02 23:47:44.137906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:50.231 BaseBdev2 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.231 BaseBdev3_malloc 00:07:50.231 23:47:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.231 true 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.231 [2024-11-02 23:47:44.176077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:07:50.231 [2024-11-02 23:47:44.176123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.231 [2024-11-02 23:47:44.176141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:07:50.231 [2024-11-02 23:47:44.176150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.231 [2024-11-02 23:47:44.178188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.231 [2024-11-02 23:47:44.178223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:07:50.231 BaseBdev3 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.231 [2024-11-02 23:47:44.188132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.231 [2024-11-02 23:47:44.189985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:50.231 [2024-11-02 23:47:44.190062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:50.231 [2024-11-02 23:47:44.190236] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:50.231 [2024-11-02 23:47:44.190256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:50.231 [2024-11-02 23:47:44.190521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:50.231 [2024-11-02 23:47:44.190669] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:50.231 [2024-11-02 23:47:44.190693] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:07:50.231 [2024-11-02 23:47:44.190840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.231 23:47:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:50.231 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.232 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.232 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.232 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.232 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.232 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.232 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.232 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.232 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.232 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.232 "name": "raid_bdev1", 00:07:50.232 "uuid": "6d71978a-6a35-442d-8e97-e796bff594f4", 00:07:50.232 "strip_size_kb": 64, 00:07:50.232 "state": "online", 00:07:50.232 "raid_level": "raid0", 00:07:50.232 "superblock": true, 00:07:50.232 "num_base_bdevs": 3, 00:07:50.232 "num_base_bdevs_discovered": 3, 00:07:50.232 "num_base_bdevs_operational": 3, 00:07:50.232 "base_bdevs_list": [ 00:07:50.232 { 00:07:50.232 "name": "BaseBdev1", 00:07:50.232 "uuid": "8b87b5d6-86e1-574c-b19d-259ba63e831d", 00:07:50.232 "is_configured": true, 00:07:50.232 "data_offset": 2048, 00:07:50.232 "data_size": 63488 00:07:50.232 }, 00:07:50.232 { 00:07:50.232 "name": "BaseBdev2", 00:07:50.232 "uuid": "9712f5d9-f339-5b17-834e-8f9e77ac902d", 00:07:50.232 "is_configured": true, 00:07:50.232 "data_offset": 2048, 00:07:50.232 "data_size": 63488 
00:07:50.232 }, 00:07:50.232 { 00:07:50.232 "name": "BaseBdev3", 00:07:50.232 "uuid": "6b9fb537-cdf2-5b46-a22a-08b5fb771c70", 00:07:50.232 "is_configured": true, 00:07:50.232 "data_offset": 2048, 00:07:50.232 "data_size": 63488 00:07:50.232 } 00:07:50.232 ] 00:07:50.232 }' 00:07:50.232 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.232 23:47:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.801 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:50.801 23:47:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:50.801 [2024-11-02 23:47:44.763543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:07:51.740 23:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:51.740 23:47:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.740 23:47:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.740 23:47:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.740 23:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:51.740 23:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:51.740 23:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:07:51.740 23:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:51.740 23:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:51.741 23:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:51.741 23:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.741 23:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.741 23:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:51.741 23:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.741 23:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.741 23:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.741 23:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.741 23:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.741 23:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.741 23:47:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.741 23:47:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.741 23:47:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.741 23:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.741 "name": "raid_bdev1", 00:07:51.741 "uuid": "6d71978a-6a35-442d-8e97-e796bff594f4", 00:07:51.741 "strip_size_kb": 64, 00:07:51.741 "state": "online", 00:07:51.741 "raid_level": "raid0", 00:07:51.741 "superblock": true, 00:07:51.741 "num_base_bdevs": 3, 00:07:51.741 "num_base_bdevs_discovered": 3, 00:07:51.741 "num_base_bdevs_operational": 3, 00:07:51.741 "base_bdevs_list": [ 00:07:51.741 { 00:07:51.741 "name": "BaseBdev1", 00:07:51.741 "uuid": "8b87b5d6-86e1-574c-b19d-259ba63e831d", 00:07:51.741 "is_configured": true, 00:07:51.741 "data_offset": 2048, 00:07:51.741 "data_size": 63488 
00:07:51.741 }, 00:07:51.741 { 00:07:51.741 "name": "BaseBdev2", 00:07:51.741 "uuid": "9712f5d9-f339-5b17-834e-8f9e77ac902d", 00:07:51.741 "is_configured": true, 00:07:51.741 "data_offset": 2048, 00:07:51.741 "data_size": 63488 00:07:51.741 }, 00:07:51.741 { 00:07:51.741 "name": "BaseBdev3", 00:07:51.741 "uuid": "6b9fb537-cdf2-5b46-a22a-08b5fb771c70", 00:07:51.741 "is_configured": true, 00:07:51.741 "data_offset": 2048, 00:07:51.741 "data_size": 63488 00:07:51.741 } 00:07:51.741 ] 00:07:51.741 }' 00:07:51.741 23:47:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.741 23:47:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.310 23:47:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:52.310 23:47:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.311 [2024-11-02 23:47:46.110808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:52.311 [2024-11-02 23:47:46.110845] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.311 [2024-11-02 23:47:46.113596] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.311 [2024-11-02 23:47:46.113647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.311 [2024-11-02 23:47:46.113683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.311 [2024-11-02 23:47:46.113701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:07:52.311 { 00:07:52.311 "results": [ 00:07:52.311 { 00:07:52.311 "job": "raid_bdev1", 00:07:52.311 "core_mask": "0x1", 00:07:52.311 "workload": "randrw", 00:07:52.311 "percentage": 50, 
00:07:52.311 "status": "finished", 00:07:52.311 "queue_depth": 1, 00:07:52.311 "io_size": 131072, 00:07:52.311 "runtime": 1.348077, 00:07:52.311 "iops": 16306.190225039074, 00:07:52.311 "mibps": 2038.2737781298842, 00:07:52.311 "io_failed": 1, 00:07:52.311 "io_timeout": 0, 00:07:52.311 "avg_latency_us": 85.2094187112034, 00:07:52.311 "min_latency_us": 24.593886462882097, 00:07:52.311 "max_latency_us": 1373.6803493449781 00:07:52.311 } 00:07:52.311 ], 00:07:52.311 "core_count": 1 00:07:52.311 } 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76298 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 76298 ']' 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 76298 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76298 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76298' 00:07:52.311 killing process with pid 76298 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 76298 00:07:52.311 [2024-11-02 23:47:46.156268] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 76298 00:07:52.311 [2024-11-02 
23:47:46.181205] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pKtnJ2V1Ij 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:52.311 00:07:52.311 real 0m3.317s 00:07:52.311 user 0m4.283s 00:07:52.311 sys 0m0.516s 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:52.311 23:47:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.311 ************************************ 00:07:52.311 END TEST raid_read_error_test 00:07:52.311 ************************************ 00:07:52.570 23:47:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:07:52.570 23:47:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:52.570 23:47:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:52.570 23:47:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.570 ************************************ 00:07:52.570 START TEST raid_write_error_test 00:07:52.570 ************************************ 00:07:52.570 23:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:07:52.570 23:47:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:52.570 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:07:52.570 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:52.570 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:52.570 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.570 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:52.570 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.570 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.570 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:52.570 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.570 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:52.571 23:47:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xguVeiAjnP 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76427 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76427 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 76427 ']' 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:52.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:52.571 23:47:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.571 [2024-11-02 23:47:46.563299] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:07:52.571 [2024-11-02 23:47:46.563429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76427 ] 00:07:52.838 [2024-11-02 23:47:46.720371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.838 [2024-11-02 23:47:46.746372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.838 [2024-11-02 23:47:46.787887] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.838 [2024-11-02 23:47:46.787921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.423 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:53.423 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:07:53.423 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.423 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:53.423 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.423 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.423 BaseBdev1_malloc 00:07:53.423 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.423 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:07:53.423 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.423 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.423 true 00:07:53.423 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.423 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:53.423 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.423 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.423 [2024-11-02 23:47:47.457242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:53.423 [2024-11-02 23:47:47.457300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.424 [2024-11-02 23:47:47.457321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:53.424 [2024-11-02 23:47:47.457330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.424 [2024-11-02 23:47:47.459482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.424 [2024-11-02 23:47:47.459520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:53.424 BaseBdev1 00:07:53.424 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.424 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.424 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:53.424 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.424 23:47:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:53.424 BaseBdev2_malloc 00:07:53.424 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.424 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:53.424 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.424 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.424 true 00:07:53.424 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.424 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:53.424 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.424 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.424 [2024-11-02 23:47:47.497593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:53.424 [2024-11-02 23:47:47.497640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.424 [2024-11-02 23:47:47.497657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:53.424 [2024-11-02 23:47:47.497673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.424 [2024-11-02 23:47:47.499939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.424 [2024-11-02 23:47:47.499981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:53.424 BaseBdev2 00:07:53.424 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.424 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.424 23:47:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:07:53.424 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.424 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.683 BaseBdev3_malloc 00:07:53.683 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.683 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:07:53.683 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.683 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.683 true 00:07:53.683 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.683 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:07:53.683 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.683 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.683 [2024-11-02 23:47:47.538046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:07:53.683 [2024-11-02 23:47:47.538092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.683 [2024-11-02 23:47:47.538110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:07:53.683 [2024-11-02 23:47:47.538119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.684 [2024-11-02 23:47:47.540198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.684 [2024-11-02 23:47:47.540233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:07:53.684 BaseBdev3 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.684 [2024-11-02 23:47:47.550095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:53.684 [2024-11-02 23:47:47.551957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:53.684 [2024-11-02 23:47:47.552045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:53.684 [2024-11-02 23:47:47.552200] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:53.684 [2024-11-02 23:47:47.552214] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:53.684 [2024-11-02 23:47:47.552454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:53.684 [2024-11-02 23:47:47.552586] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:53.684 [2024-11-02 23:47:47.552601] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:07:53.684 [2024-11-02 23:47:47.552736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.684 "name": "raid_bdev1", 00:07:53.684 "uuid": "d70ab8f7-93c9-42bf-b8f3-3a48f36bc480", 00:07:53.684 "strip_size_kb": 64, 00:07:53.684 "state": "online", 00:07:53.684 "raid_level": "raid0", 00:07:53.684 "superblock": true, 00:07:53.684 "num_base_bdevs": 3, 00:07:53.684 "num_base_bdevs_discovered": 3, 00:07:53.684 "num_base_bdevs_operational": 3, 00:07:53.684 "base_bdevs_list": [ 00:07:53.684 { 00:07:53.684 "name": "BaseBdev1", 
00:07:53.684 "uuid": "05e47d01-e3ce-566e-a4a5-548c06dbbb39", 00:07:53.684 "is_configured": true, 00:07:53.684 "data_offset": 2048, 00:07:53.684 "data_size": 63488 00:07:53.684 }, 00:07:53.684 { 00:07:53.684 "name": "BaseBdev2", 00:07:53.684 "uuid": "6988e1e6-ad27-57d2-98ca-e6b893a01394", 00:07:53.684 "is_configured": true, 00:07:53.684 "data_offset": 2048, 00:07:53.684 "data_size": 63488 00:07:53.684 }, 00:07:53.684 { 00:07:53.684 "name": "BaseBdev3", 00:07:53.684 "uuid": "51347ba6-551a-56f3-9bd6-95d1cfca9c95", 00:07:53.684 "is_configured": true, 00:07:53.684 "data_offset": 2048, 00:07:53.684 "data_size": 63488 00:07:53.684 } 00:07:53.684 ] 00:07:53.684 }' 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.684 23:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.943 23:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:53.943 23:47:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:54.202 [2024-11-02 23:47:48.113515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.138 "name": "raid_bdev1", 00:07:55.138 "uuid": "d70ab8f7-93c9-42bf-b8f3-3a48f36bc480", 00:07:55.138 "strip_size_kb": 64, 00:07:55.138 "state": "online", 00:07:55.138 
"raid_level": "raid0", 00:07:55.138 "superblock": true, 00:07:55.138 "num_base_bdevs": 3, 00:07:55.138 "num_base_bdevs_discovered": 3, 00:07:55.138 "num_base_bdevs_operational": 3, 00:07:55.138 "base_bdevs_list": [ 00:07:55.138 { 00:07:55.138 "name": "BaseBdev1", 00:07:55.138 "uuid": "05e47d01-e3ce-566e-a4a5-548c06dbbb39", 00:07:55.138 "is_configured": true, 00:07:55.138 "data_offset": 2048, 00:07:55.138 "data_size": 63488 00:07:55.138 }, 00:07:55.138 { 00:07:55.138 "name": "BaseBdev2", 00:07:55.138 "uuid": "6988e1e6-ad27-57d2-98ca-e6b893a01394", 00:07:55.138 "is_configured": true, 00:07:55.138 "data_offset": 2048, 00:07:55.138 "data_size": 63488 00:07:55.138 }, 00:07:55.138 { 00:07:55.138 "name": "BaseBdev3", 00:07:55.138 "uuid": "51347ba6-551a-56f3-9bd6-95d1cfca9c95", 00:07:55.138 "is_configured": true, 00:07:55.138 "data_offset": 2048, 00:07:55.138 "data_size": 63488 00:07:55.138 } 00:07:55.138 ] 00:07:55.138 }' 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.138 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.707 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:55.707 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.707 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.707 [2024-11-02 23:47:49.513576] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:55.707 [2024-11-02 23:47:49.513614] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:55.707 [2024-11-02 23:47:49.516179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.707 [2024-11-02 23:47:49.516229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.707 [2024-11-02 23:47:49.516264] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:55.707 [2024-11-02 23:47:49.516275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:07:55.707 { 00:07:55.707 "results": [ 00:07:55.707 { 00:07:55.707 "job": "raid_bdev1", 00:07:55.707 "core_mask": "0x1", 00:07:55.707 "workload": "randrw", 00:07:55.707 "percentage": 50, 00:07:55.707 "status": "finished", 00:07:55.707 "queue_depth": 1, 00:07:55.707 "io_size": 131072, 00:07:55.707 "runtime": 1.400915, 00:07:55.707 "iops": 16501.358040994637, 00:07:55.707 "mibps": 2062.6697551243296, 00:07:55.707 "io_failed": 1, 00:07:55.707 "io_timeout": 0, 00:07:55.707 "avg_latency_us": 84.0286865449369, 00:07:55.707 "min_latency_us": 19.227947598253277, 00:07:55.707 "max_latency_us": 1445.2262008733624 00:07:55.707 } 00:07:55.707 ], 00:07:55.707 "core_count": 1 00:07:55.707 } 00:07:55.707 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.707 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76427 00:07:55.707 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 76427 ']' 00:07:55.707 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 76427 00:07:55.707 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:07:55.707 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:55.707 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76427 00:07:55.707 killing process with pid 76427 00:07:55.707 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:55.707 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:55.707 23:47:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76427' 00:07:55.707 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 76427 00:07:55.707 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 76427 00:07:55.707 [2024-11-02 23:47:49.564781] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:55.707 [2024-11-02 23:47:49.590659] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.707 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xguVeiAjnP 00:07:55.707 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:55.707 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:55.707 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:07:55.967 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:55.967 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:55.967 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:55.967 23:47:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:07:55.967 00:07:55.967 real 0m3.337s 00:07:55.967 user 0m4.321s 00:07:55.967 sys 0m0.500s 00:07:55.967 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:55.967 23:47:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.967 ************************************ 00:07:55.967 END TEST raid_write_error_test 00:07:55.967 ************************************ 00:07:55.967 23:47:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:55.967 23:47:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:07:55.967 23:47:49 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:55.967 23:47:49 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:55.967 23:47:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:55.967 ************************************ 00:07:55.967 START TEST raid_state_function_test 00:07:55.967 ************************************ 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:55.967 23:47:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76554 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:55.967 Process raid pid: 76554 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76554' 00:07:55.967 23:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76554 00:07:55.968 23:47:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 76554 ']' 00:07:55.968 23:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.968 23:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:55.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.968 23:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.968 23:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:55.968 23:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.968 [2024-11-02 23:47:49.969023] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:07:55.968 [2024-11-02 23:47:49.969151] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.226 [2024-11-02 23:47:50.122978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.226 [2024-11-02 23:47:50.152122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.226 [2024-11-02 23:47:50.193810] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.226 [2024-11-02 23:47:50.193845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.820 23:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:56.820 23:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:07:56.820 23:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:56.820 23:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.820 23:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.820 [2024-11-02 23:47:50.803305] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:56.820 [2024-11-02 23:47:50.803358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:56.820 [2024-11-02 23:47:50.803376] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:56.820 [2024-11-02 23:47:50.803386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:56.820 [2024-11-02 23:47:50.803393] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:56.820 [2024-11-02 23:47:50.803403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:56.820 23:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.820 23:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:56.820 23:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.821 23:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.821 23:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:56.821 23:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.821 23:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:56.821 23:47:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.821 23:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.821 23:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.821 23:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.821 23:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.821 23:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.821 23:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.821 23:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.821 23:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.821 23:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.821 "name": "Existed_Raid", 00:07:56.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.821 "strip_size_kb": 64, 00:07:56.821 "state": "configuring", 00:07:56.821 "raid_level": "concat", 00:07:56.821 "superblock": false, 00:07:56.821 "num_base_bdevs": 3, 00:07:56.821 "num_base_bdevs_discovered": 0, 00:07:56.821 "num_base_bdevs_operational": 3, 00:07:56.821 "base_bdevs_list": [ 00:07:56.821 { 00:07:56.821 "name": "BaseBdev1", 00:07:56.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.821 "is_configured": false, 00:07:56.821 "data_offset": 0, 00:07:56.821 "data_size": 0 00:07:56.821 }, 00:07:56.821 { 00:07:56.821 "name": "BaseBdev2", 00:07:56.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.821 "is_configured": false, 00:07:56.821 "data_offset": 0, 00:07:56.821 "data_size": 0 00:07:56.821 }, 00:07:56.821 { 00:07:56.821 "name": "BaseBdev3", 00:07:56.821 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:07:56.821 "is_configured": false, 00:07:56.821 "data_offset": 0, 00:07:56.821 "data_size": 0 00:07:56.821 } 00:07:56.821 ] 00:07:56.821 }' 00:07:56.821 23:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.821 23:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.144 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.144 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.144 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.144 [2024-11-02 23:47:51.198594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.144 [2024-11-02 23:47:51.198637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:57.144 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.144 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:57.144 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.144 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.144 [2024-11-02 23:47:51.210593] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.144 [2024-11-02 23:47:51.210635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.144 [2024-11-02 23:47:51.210644] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.145 [2024-11-02 23:47:51.210653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:07:57.145 [2024-11-02 23:47:51.210659] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:57.145 [2024-11-02 23:47:51.210668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:57.145 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.145 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:57.145 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.145 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.145 [2024-11-02 23:47:51.231212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.145 BaseBdev1 00:07:57.145 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.145 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:57.145 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:57.145 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:57.145 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:57.145 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:57.145 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:57.145 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:57.145 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.145 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.404 [ 00:07:57.404 { 00:07:57.404 "name": "BaseBdev1", 00:07:57.404 "aliases": [ 00:07:57.404 "48c8d251-d39b-4bde-b033-258d9284f35d" 00:07:57.404 ], 00:07:57.404 "product_name": "Malloc disk", 00:07:57.404 "block_size": 512, 00:07:57.404 "num_blocks": 65536, 00:07:57.404 "uuid": "48c8d251-d39b-4bde-b033-258d9284f35d", 00:07:57.404 "assigned_rate_limits": { 00:07:57.404 "rw_ios_per_sec": 0, 00:07:57.404 "rw_mbytes_per_sec": 0, 00:07:57.404 "r_mbytes_per_sec": 0, 00:07:57.404 "w_mbytes_per_sec": 0 00:07:57.404 }, 00:07:57.404 "claimed": true, 00:07:57.404 "claim_type": "exclusive_write", 00:07:57.404 "zoned": false, 00:07:57.404 "supported_io_types": { 00:07:57.404 "read": true, 00:07:57.404 "write": true, 00:07:57.404 "unmap": true, 00:07:57.404 "flush": true, 00:07:57.404 "reset": true, 00:07:57.404 "nvme_admin": false, 00:07:57.404 "nvme_io": false, 00:07:57.404 "nvme_io_md": false, 00:07:57.404 "write_zeroes": true, 00:07:57.404 "zcopy": true, 00:07:57.404 "get_zone_info": false, 00:07:57.404 "zone_management": false, 00:07:57.404 "zone_append": false, 00:07:57.404 "compare": false, 00:07:57.404 "compare_and_write": false, 00:07:57.404 "abort": true, 00:07:57.404 "seek_hole": false, 00:07:57.404 "seek_data": false, 00:07:57.404 "copy": true, 00:07:57.404 "nvme_iov_md": false 00:07:57.404 }, 00:07:57.404 "memory_domains": [ 00:07:57.404 { 00:07:57.404 "dma_device_id": "system", 00:07:57.404 "dma_device_type": 1 00:07:57.404 }, 00:07:57.404 { 00:07:57.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:07:57.404 "dma_device_type": 2 00:07:57.404 } 00:07:57.404 ], 00:07:57.404 "driver_specific": {} 00:07:57.404 } 00:07:57.404 ] 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.404 23:47:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.404 "name": "Existed_Raid", 00:07:57.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.404 "strip_size_kb": 64, 00:07:57.404 "state": "configuring", 00:07:57.404 "raid_level": "concat", 00:07:57.404 "superblock": false, 00:07:57.404 "num_base_bdevs": 3, 00:07:57.404 "num_base_bdevs_discovered": 1, 00:07:57.404 "num_base_bdevs_operational": 3, 00:07:57.404 "base_bdevs_list": [ 00:07:57.404 { 00:07:57.404 "name": "BaseBdev1", 00:07:57.404 "uuid": "48c8d251-d39b-4bde-b033-258d9284f35d", 00:07:57.404 "is_configured": true, 00:07:57.404 "data_offset": 0, 00:07:57.404 "data_size": 65536 00:07:57.404 }, 00:07:57.404 { 00:07:57.404 "name": "BaseBdev2", 00:07:57.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.404 "is_configured": false, 00:07:57.404 "data_offset": 0, 00:07:57.404 "data_size": 0 00:07:57.404 }, 00:07:57.404 { 00:07:57.404 "name": "BaseBdev3", 00:07:57.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.404 "is_configured": false, 00:07:57.404 "data_offset": 0, 00:07:57.404 "data_size": 0 00:07:57.404 } 00:07:57.404 ] 00:07:57.404 }' 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.404 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.664 [2024-11-02 23:47:51.702451] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.664 [2024-11-02 23:47:51.702507] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.664 [2024-11-02 23:47:51.714471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.664 [2024-11-02 23:47:51.716317] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.664 [2024-11-02 23:47:51.716356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.664 [2024-11-02 23:47:51.716366] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:57.664 [2024-11-02 23:47:51.716376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.664 23:47:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.664 "name": "Existed_Raid", 00:07:57.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.664 "strip_size_kb": 64, 00:07:57.664 "state": "configuring", 00:07:57.664 "raid_level": "concat", 00:07:57.664 "superblock": false, 00:07:57.664 "num_base_bdevs": 3, 00:07:57.664 "num_base_bdevs_discovered": 1, 00:07:57.664 "num_base_bdevs_operational": 3, 00:07:57.664 "base_bdevs_list": [ 00:07:57.664 { 00:07:57.664 "name": "BaseBdev1", 00:07:57.664 "uuid": "48c8d251-d39b-4bde-b033-258d9284f35d", 00:07:57.664 "is_configured": true, 00:07:57.664 "data_offset": 
0, 00:07:57.664 "data_size": 65536 00:07:57.664 }, 00:07:57.664 { 00:07:57.664 "name": "BaseBdev2", 00:07:57.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.664 "is_configured": false, 00:07:57.664 "data_offset": 0, 00:07:57.664 "data_size": 0 00:07:57.664 }, 00:07:57.664 { 00:07:57.664 "name": "BaseBdev3", 00:07:57.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.664 "is_configured": false, 00:07:57.664 "data_offset": 0, 00:07:57.664 "data_size": 0 00:07:57.664 } 00:07:57.664 ] 00:07:57.664 }' 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.664 23:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.233 BaseBdev2 00:07:58.233 [2024-11-02 23:47:52.136529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.233 [ 00:07:58.233 { 00:07:58.233 "name": "BaseBdev2", 00:07:58.233 "aliases": [ 00:07:58.233 "5a348c76-1995-41fd-8c9c-f985f85397aa" 00:07:58.233 ], 00:07:58.233 "product_name": "Malloc disk", 00:07:58.233 "block_size": 512, 00:07:58.233 "num_blocks": 65536, 00:07:58.233 "uuid": "5a348c76-1995-41fd-8c9c-f985f85397aa", 00:07:58.233 "assigned_rate_limits": { 00:07:58.233 "rw_ios_per_sec": 0, 00:07:58.233 "rw_mbytes_per_sec": 0, 00:07:58.233 "r_mbytes_per_sec": 0, 00:07:58.233 "w_mbytes_per_sec": 0 00:07:58.233 }, 00:07:58.233 "claimed": true, 00:07:58.233 "claim_type": "exclusive_write", 00:07:58.233 "zoned": false, 00:07:58.233 "supported_io_types": { 00:07:58.233 "read": true, 00:07:58.233 "write": true, 00:07:58.233 "unmap": true, 00:07:58.233 "flush": true, 00:07:58.233 "reset": true, 00:07:58.233 "nvme_admin": false, 00:07:58.233 "nvme_io": false, 00:07:58.233 "nvme_io_md": false, 00:07:58.233 "write_zeroes": true, 00:07:58.233 "zcopy": true, 00:07:58.233 "get_zone_info": false, 00:07:58.233 "zone_management": false, 00:07:58.233 "zone_append": false, 00:07:58.233 "compare": false, 00:07:58.233 "compare_and_write": false, 00:07:58.233 "abort": true, 00:07:58.233 "seek_hole": 
false, 00:07:58.233 "seek_data": false, 00:07:58.233 "copy": true, 00:07:58.233 "nvme_iov_md": false 00:07:58.233 }, 00:07:58.233 "memory_domains": [ 00:07:58.233 { 00:07:58.233 "dma_device_id": "system", 00:07:58.233 "dma_device_type": 1 00:07:58.233 }, 00:07:58.233 { 00:07:58.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.233 "dma_device_type": 2 00:07:58.233 } 00:07:58.233 ], 00:07:58.233 "driver_specific": {} 00:07:58.233 } 00:07:58.233 ] 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:58.233 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.234 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:58.234 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.234 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.234 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.234 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.234 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.234 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.234 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.234 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.234 23:47:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.234 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.234 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.234 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.234 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.234 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.234 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.234 "name": "Existed_Raid", 00:07:58.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.234 "strip_size_kb": 64, 00:07:58.234 "state": "configuring", 00:07:58.234 "raid_level": "concat", 00:07:58.234 "superblock": false, 00:07:58.234 "num_base_bdevs": 3, 00:07:58.234 "num_base_bdevs_discovered": 2, 00:07:58.234 "num_base_bdevs_operational": 3, 00:07:58.234 "base_bdevs_list": [ 00:07:58.234 { 00:07:58.234 "name": "BaseBdev1", 00:07:58.234 "uuid": "48c8d251-d39b-4bde-b033-258d9284f35d", 00:07:58.234 "is_configured": true, 00:07:58.234 "data_offset": 0, 00:07:58.234 "data_size": 65536 00:07:58.234 }, 00:07:58.234 { 00:07:58.234 "name": "BaseBdev2", 00:07:58.234 "uuid": "5a348c76-1995-41fd-8c9c-f985f85397aa", 00:07:58.234 "is_configured": true, 00:07:58.234 "data_offset": 0, 00:07:58.234 "data_size": 65536 00:07:58.234 }, 00:07:58.234 { 00:07:58.234 "name": "BaseBdev3", 00:07:58.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.234 "is_configured": false, 00:07:58.234 "data_offset": 0, 00:07:58.234 "data_size": 0 00:07:58.234 } 00:07:58.234 ] 00:07:58.234 }' 00:07:58.234 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.234 23:47:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:58.494 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:58.494 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.494 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.752 [2024-11-02 23:47:52.593110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:58.752 [2024-11-02 23:47:52.593151] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:58.752 [2024-11-02 23:47:52.593161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:58.752 [2024-11-02 23:47:52.593447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:58.752 [2024-11-02 23:47:52.593590] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:58.752 [2024-11-02 23:47:52.593612] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:58.752 [2024-11-02 23:47:52.593846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.752 BaseBdev3 00:07:58.752 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.752 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:58.752 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:07:58.752 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:58.752 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:58.752 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:58.752 23:47:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:58.752 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:58.752 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.752 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.752 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.752 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:58.752 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.752 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.752 [ 00:07:58.752 { 00:07:58.752 "name": "BaseBdev3", 00:07:58.753 "aliases": [ 00:07:58.753 "7576000d-bdf2-4dfa-bfc2-5deee24331a4" 00:07:58.753 ], 00:07:58.753 "product_name": "Malloc disk", 00:07:58.753 "block_size": 512, 00:07:58.753 "num_blocks": 65536, 00:07:58.753 "uuid": "7576000d-bdf2-4dfa-bfc2-5deee24331a4", 00:07:58.753 "assigned_rate_limits": { 00:07:58.753 "rw_ios_per_sec": 0, 00:07:58.753 "rw_mbytes_per_sec": 0, 00:07:58.753 "r_mbytes_per_sec": 0, 00:07:58.753 "w_mbytes_per_sec": 0 00:07:58.753 }, 00:07:58.753 "claimed": true, 00:07:58.753 "claim_type": "exclusive_write", 00:07:58.753 "zoned": false, 00:07:58.753 "supported_io_types": { 00:07:58.753 "read": true, 00:07:58.753 "write": true, 00:07:58.753 "unmap": true, 00:07:58.753 "flush": true, 00:07:58.753 "reset": true, 00:07:58.753 "nvme_admin": false, 00:07:58.753 "nvme_io": false, 00:07:58.753 "nvme_io_md": false, 00:07:58.753 "write_zeroes": true, 00:07:58.753 "zcopy": true, 00:07:58.753 "get_zone_info": false, 00:07:58.753 "zone_management": false, 00:07:58.753 "zone_append": false, 00:07:58.753 "compare": false, 
00:07:58.753 "compare_and_write": false, 00:07:58.753 "abort": true, 00:07:58.753 "seek_hole": false, 00:07:58.753 "seek_data": false, 00:07:58.753 "copy": true, 00:07:58.753 "nvme_iov_md": false 00:07:58.753 }, 00:07:58.753 "memory_domains": [ 00:07:58.753 { 00:07:58.753 "dma_device_id": "system", 00:07:58.753 "dma_device_type": 1 00:07:58.753 }, 00:07:58.753 { 00:07:58.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.753 "dma_device_type": 2 00:07:58.753 } 00:07:58.753 ], 00:07:58.753 "driver_specific": {} 00:07:58.753 } 00:07:58.753 ] 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.753 "name": "Existed_Raid", 00:07:58.753 "uuid": "f67c01f3-b6a4-45d0-9f1b-e5e32bf131d5", 00:07:58.753 "strip_size_kb": 64, 00:07:58.753 "state": "online", 00:07:58.753 "raid_level": "concat", 00:07:58.753 "superblock": false, 00:07:58.753 "num_base_bdevs": 3, 00:07:58.753 "num_base_bdevs_discovered": 3, 00:07:58.753 "num_base_bdevs_operational": 3, 00:07:58.753 "base_bdevs_list": [ 00:07:58.753 { 00:07:58.753 "name": "BaseBdev1", 00:07:58.753 "uuid": "48c8d251-d39b-4bde-b033-258d9284f35d", 00:07:58.753 "is_configured": true, 00:07:58.753 "data_offset": 0, 00:07:58.753 "data_size": 65536 00:07:58.753 }, 00:07:58.753 { 00:07:58.753 "name": "BaseBdev2", 00:07:58.753 "uuid": "5a348c76-1995-41fd-8c9c-f985f85397aa", 00:07:58.753 "is_configured": true, 00:07:58.753 "data_offset": 0, 00:07:58.753 "data_size": 65536 00:07:58.753 }, 00:07:58.753 { 00:07:58.753 "name": "BaseBdev3", 00:07:58.753 "uuid": "7576000d-bdf2-4dfa-bfc2-5deee24331a4", 00:07:58.753 "is_configured": true, 00:07:58.753 "data_offset": 0, 00:07:58.753 "data_size": 65536 00:07:58.753 } 00:07:58.753 ] 00:07:58.753 }' 00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:58.753 23:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.019 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:59.019 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:59.019 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:59.019 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:59.019 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:59.019 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:59.019 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:59.019 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:59.019 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.019 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.019 [2024-11-02 23:47:53.100574] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:59.284 "name": "Existed_Raid", 00:07:59.284 "aliases": [ 00:07:59.284 "f67c01f3-b6a4-45d0-9f1b-e5e32bf131d5" 00:07:59.284 ], 00:07:59.284 "product_name": "Raid Volume", 00:07:59.284 "block_size": 512, 00:07:59.284 "num_blocks": 196608, 00:07:59.284 "uuid": "f67c01f3-b6a4-45d0-9f1b-e5e32bf131d5", 00:07:59.284 "assigned_rate_limits": { 00:07:59.284 "rw_ios_per_sec": 0, 00:07:59.284 "rw_mbytes_per_sec": 0, 00:07:59.284 "r_mbytes_per_sec": 
0, 00:07:59.284 "w_mbytes_per_sec": 0 00:07:59.284 }, 00:07:59.284 "claimed": false, 00:07:59.284 "zoned": false, 00:07:59.284 "supported_io_types": { 00:07:59.284 "read": true, 00:07:59.284 "write": true, 00:07:59.284 "unmap": true, 00:07:59.284 "flush": true, 00:07:59.284 "reset": true, 00:07:59.284 "nvme_admin": false, 00:07:59.284 "nvme_io": false, 00:07:59.284 "nvme_io_md": false, 00:07:59.284 "write_zeroes": true, 00:07:59.284 "zcopy": false, 00:07:59.284 "get_zone_info": false, 00:07:59.284 "zone_management": false, 00:07:59.284 "zone_append": false, 00:07:59.284 "compare": false, 00:07:59.284 "compare_and_write": false, 00:07:59.284 "abort": false, 00:07:59.284 "seek_hole": false, 00:07:59.284 "seek_data": false, 00:07:59.284 "copy": false, 00:07:59.284 "nvme_iov_md": false 00:07:59.284 }, 00:07:59.284 "memory_domains": [ 00:07:59.284 { 00:07:59.284 "dma_device_id": "system", 00:07:59.284 "dma_device_type": 1 00:07:59.284 }, 00:07:59.284 { 00:07:59.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.284 "dma_device_type": 2 00:07:59.284 }, 00:07:59.284 { 00:07:59.284 "dma_device_id": "system", 00:07:59.284 "dma_device_type": 1 00:07:59.284 }, 00:07:59.284 { 00:07:59.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.284 "dma_device_type": 2 00:07:59.284 }, 00:07:59.284 { 00:07:59.284 "dma_device_id": "system", 00:07:59.284 "dma_device_type": 1 00:07:59.284 }, 00:07:59.284 { 00:07:59.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.284 "dma_device_type": 2 00:07:59.284 } 00:07:59.284 ], 00:07:59.284 "driver_specific": { 00:07:59.284 "raid": { 00:07:59.284 "uuid": "f67c01f3-b6a4-45d0-9f1b-e5e32bf131d5", 00:07:59.284 "strip_size_kb": 64, 00:07:59.284 "state": "online", 00:07:59.284 "raid_level": "concat", 00:07:59.284 "superblock": false, 00:07:59.284 "num_base_bdevs": 3, 00:07:59.284 "num_base_bdevs_discovered": 3, 00:07:59.284 "num_base_bdevs_operational": 3, 00:07:59.284 "base_bdevs_list": [ 00:07:59.284 { 00:07:59.284 "name": "BaseBdev1", 
00:07:59.284 "uuid": "48c8d251-d39b-4bde-b033-258d9284f35d", 00:07:59.284 "is_configured": true, 00:07:59.284 "data_offset": 0, 00:07:59.284 "data_size": 65536 00:07:59.284 }, 00:07:59.284 { 00:07:59.284 "name": "BaseBdev2", 00:07:59.284 "uuid": "5a348c76-1995-41fd-8c9c-f985f85397aa", 00:07:59.284 "is_configured": true, 00:07:59.284 "data_offset": 0, 00:07:59.284 "data_size": 65536 00:07:59.284 }, 00:07:59.284 { 00:07:59.284 "name": "BaseBdev3", 00:07:59.284 "uuid": "7576000d-bdf2-4dfa-bfc2-5deee24331a4", 00:07:59.284 "is_configured": true, 00:07:59.284 "data_offset": 0, 00:07:59.284 "data_size": 65536 00:07:59.284 } 00:07:59.284 ] 00:07:59.284 } 00:07:59.284 } 00:07:59.284 }' 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:59.284 BaseBdev2 00:07:59.284 BaseBdev3' 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.284 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.285 [2024-11-02 23:47:53.339917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:59.285 [2024-11-02 23:47:53.339982] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.285 [2024-11-02 23:47:53.340052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.285 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.544 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.544 "name": "Existed_Raid", 00:07:59.544 "uuid": "f67c01f3-b6a4-45d0-9f1b-e5e32bf131d5", 00:07:59.544 "strip_size_kb": 64, 00:07:59.544 "state": "offline", 00:07:59.544 "raid_level": "concat", 00:07:59.544 "superblock": false, 00:07:59.544 "num_base_bdevs": 3, 00:07:59.544 "num_base_bdevs_discovered": 2, 00:07:59.544 "num_base_bdevs_operational": 2, 00:07:59.544 "base_bdevs_list": [ 00:07:59.544 { 00:07:59.544 "name": null, 00:07:59.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.544 "is_configured": false, 00:07:59.544 "data_offset": 0, 00:07:59.544 "data_size": 65536 00:07:59.544 }, 00:07:59.544 { 00:07:59.544 "name": "BaseBdev2", 00:07:59.544 "uuid": 
"5a348c76-1995-41fd-8c9c-f985f85397aa", 00:07:59.544 "is_configured": true, 00:07:59.544 "data_offset": 0, 00:07:59.544 "data_size": 65536 00:07:59.544 }, 00:07:59.544 { 00:07:59.544 "name": "BaseBdev3", 00:07:59.544 "uuid": "7576000d-bdf2-4dfa-bfc2-5deee24331a4", 00:07:59.544 "is_configured": true, 00:07:59.544 "data_offset": 0, 00:07:59.544 "data_size": 65536 00:07:59.544 } 00:07:59.544 ] 00:07:59.544 }' 00:07:59.544 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.544 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.811 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:59.811 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:59.811 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.811 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:59.811 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.811 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.811 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.811 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:59.811 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:59.811 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:59.811 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.811 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.811 [2024-11-02 23:47:53.874315] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:59.811 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.811 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:59.811 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:59.811 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.811 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:59.811 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.811 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.073 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.073 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:00.073 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:00.073 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:00.073 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.073 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.073 [2024-11-02 23:47:53.941373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:00.073 [2024-11-02 23:47:53.941471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:00.073 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.073 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:00.073 23:47:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:00.073 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.073 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.073 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.073 23:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:00.073 23:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.073 BaseBdev2 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:00.073 
23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.073 [ 00:08:00.073 { 00:08:00.073 "name": "BaseBdev2", 00:08:00.073 "aliases": [ 00:08:00.073 "c7734000-f7a6-46d4-b18c-f745d3271c92" 00:08:00.073 ], 00:08:00.073 "product_name": "Malloc disk", 00:08:00.073 "block_size": 512, 00:08:00.073 "num_blocks": 65536, 00:08:00.073 "uuid": "c7734000-f7a6-46d4-b18c-f745d3271c92", 00:08:00.073 "assigned_rate_limits": { 00:08:00.073 "rw_ios_per_sec": 0, 00:08:00.073 "rw_mbytes_per_sec": 0, 00:08:00.073 "r_mbytes_per_sec": 0, 00:08:00.073 "w_mbytes_per_sec": 0 00:08:00.073 }, 00:08:00.073 "claimed": false, 00:08:00.073 "zoned": false, 00:08:00.073 "supported_io_types": { 00:08:00.073 "read": true, 00:08:00.073 "write": true, 00:08:00.073 "unmap": true, 00:08:00.073 "flush": true, 00:08:00.073 "reset": true, 00:08:00.073 "nvme_admin": false, 00:08:00.073 "nvme_io": false, 00:08:00.073 "nvme_io_md": false, 00:08:00.073 "write_zeroes": true, 
00:08:00.073 "zcopy": true, 00:08:00.073 "get_zone_info": false, 00:08:00.073 "zone_management": false, 00:08:00.073 "zone_append": false, 00:08:00.073 "compare": false, 00:08:00.073 "compare_and_write": false, 00:08:00.073 "abort": true, 00:08:00.073 "seek_hole": false, 00:08:00.073 "seek_data": false, 00:08:00.073 "copy": true, 00:08:00.073 "nvme_iov_md": false 00:08:00.073 }, 00:08:00.073 "memory_domains": [ 00:08:00.073 { 00:08:00.073 "dma_device_id": "system", 00:08:00.073 "dma_device_type": 1 00:08:00.073 }, 00:08:00.073 { 00:08:00.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.073 "dma_device_type": 2 00:08:00.073 } 00:08:00.073 ], 00:08:00.073 "driver_specific": {} 00:08:00.073 } 00:08:00.073 ] 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.073 BaseBdev3 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:00.073 23:47:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.073 [ 00:08:00.073 { 00:08:00.073 "name": "BaseBdev3", 00:08:00.073 "aliases": [ 00:08:00.073 "eb6f93d3-0574-47c8-af47-4abb223928c8" 00:08:00.073 ], 00:08:00.073 "product_name": "Malloc disk", 00:08:00.073 "block_size": 512, 00:08:00.073 "num_blocks": 65536, 00:08:00.073 "uuid": "eb6f93d3-0574-47c8-af47-4abb223928c8", 00:08:00.073 "assigned_rate_limits": { 00:08:00.073 "rw_ios_per_sec": 0, 00:08:00.073 "rw_mbytes_per_sec": 0, 00:08:00.073 "r_mbytes_per_sec": 0, 00:08:00.073 "w_mbytes_per_sec": 0 00:08:00.073 }, 00:08:00.073 "claimed": false, 00:08:00.073 "zoned": false, 00:08:00.073 "supported_io_types": { 00:08:00.073 "read": true, 00:08:00.073 "write": true, 00:08:00.073 "unmap": true, 00:08:00.073 "flush": true, 00:08:00.073 "reset": true, 00:08:00.073 "nvme_admin": false, 00:08:00.073 "nvme_io": false, 00:08:00.073 "nvme_io_md": false, 00:08:00.073 "write_zeroes": true, 
00:08:00.073 "zcopy": true, 00:08:00.073 "get_zone_info": false, 00:08:00.073 "zone_management": false, 00:08:00.073 "zone_append": false, 00:08:00.073 "compare": false, 00:08:00.073 "compare_and_write": false, 00:08:00.073 "abort": true, 00:08:00.073 "seek_hole": false, 00:08:00.073 "seek_data": false, 00:08:00.073 "copy": true, 00:08:00.073 "nvme_iov_md": false 00:08:00.073 }, 00:08:00.073 "memory_domains": [ 00:08:00.073 { 00:08:00.073 "dma_device_id": "system", 00:08:00.073 "dma_device_type": 1 00:08:00.073 }, 00:08:00.073 { 00:08:00.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.073 "dma_device_type": 2 00:08:00.073 } 00:08:00.073 ], 00:08:00.073 "driver_specific": {} 00:08:00.073 } 00:08:00.073 ] 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:00.073 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.074 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.074 [2024-11-02 23:47:54.111590] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.074 [2024-11-02 23:47:54.111686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.074 [2024-11-02 23:47:54.111725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:00.074 [2024-11-02 23:47:54.113451] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:00.074 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.074 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:00.074 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.074 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.074 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:00.074 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.074 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.074 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.074 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.074 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.074 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.074 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.074 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.074 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.074 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.074 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.333 23:47:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.333 "name": "Existed_Raid", 00:08:00.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.333 "strip_size_kb": 64, 00:08:00.333 "state": "configuring", 00:08:00.333 "raid_level": "concat", 00:08:00.333 "superblock": false, 00:08:00.333 "num_base_bdevs": 3, 00:08:00.333 "num_base_bdevs_discovered": 2, 00:08:00.333 "num_base_bdevs_operational": 3, 00:08:00.333 "base_bdevs_list": [ 00:08:00.333 { 00:08:00.333 "name": "BaseBdev1", 00:08:00.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.333 "is_configured": false, 00:08:00.333 "data_offset": 0, 00:08:00.333 "data_size": 0 00:08:00.333 }, 00:08:00.333 { 00:08:00.333 "name": "BaseBdev2", 00:08:00.333 "uuid": "c7734000-f7a6-46d4-b18c-f745d3271c92", 00:08:00.333 "is_configured": true, 00:08:00.333 "data_offset": 0, 00:08:00.333 "data_size": 65536 00:08:00.333 }, 00:08:00.333 { 00:08:00.333 "name": "BaseBdev3", 00:08:00.333 "uuid": "eb6f93d3-0574-47c8-af47-4abb223928c8", 00:08:00.333 "is_configured": true, 00:08:00.333 "data_offset": 0, 00:08:00.333 "data_size": 65536 00:08:00.333 } 00:08:00.333 ] 00:08:00.333 }' 00:08:00.333 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.333 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.592 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:00.592 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.592 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.592 [2024-11-02 23:47:54.526849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:00.592 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.592 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:00.592 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.592 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.592 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:00.592 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.592 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.592 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.592 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.592 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.592 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.593 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.593 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.593 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.593 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.593 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.593 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.593 "name": "Existed_Raid", 00:08:00.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.593 "strip_size_kb": 64, 00:08:00.593 "state": "configuring", 00:08:00.593 "raid_level": "concat", 00:08:00.593 "superblock": false, 
00:08:00.593 "num_base_bdevs": 3, 00:08:00.593 "num_base_bdevs_discovered": 1, 00:08:00.593 "num_base_bdevs_operational": 3, 00:08:00.593 "base_bdevs_list": [ 00:08:00.593 { 00:08:00.593 "name": "BaseBdev1", 00:08:00.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.593 "is_configured": false, 00:08:00.593 "data_offset": 0, 00:08:00.593 "data_size": 0 00:08:00.593 }, 00:08:00.593 { 00:08:00.593 "name": null, 00:08:00.593 "uuid": "c7734000-f7a6-46d4-b18c-f745d3271c92", 00:08:00.593 "is_configured": false, 00:08:00.593 "data_offset": 0, 00:08:00.593 "data_size": 65536 00:08:00.593 }, 00:08:00.593 { 00:08:00.593 "name": "BaseBdev3", 00:08:00.593 "uuid": "eb6f93d3-0574-47c8-af47-4abb223928c8", 00:08:00.593 "is_configured": true, 00:08:00.593 "data_offset": 0, 00:08:00.593 "data_size": 65536 00:08:00.593 } 00:08:00.593 ] 00:08:00.593 }' 00:08:00.593 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.593 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.170 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.170 23:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:01.170 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.170 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.170 23:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.170 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:01.170 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:01.170 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.170 
23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.170 [2024-11-02 23:47:55.017103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.171 BaseBdev1 00:08:01.171 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.171 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:01.171 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:01.171 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:01.171 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:01.171 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:01.171 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:01.171 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:01.171 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.171 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.171 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.171 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:01.171 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.171 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.171 [ 00:08:01.171 { 00:08:01.171 "name": "BaseBdev1", 00:08:01.171 "aliases": [ 00:08:01.171 "fddbb275-7e36-4994-9f54-4d44538e35cc" 00:08:01.171 ], 00:08:01.171 "product_name": 
"Malloc disk", 00:08:01.171 "block_size": 512, 00:08:01.171 "num_blocks": 65536, 00:08:01.171 "uuid": "fddbb275-7e36-4994-9f54-4d44538e35cc", 00:08:01.172 "assigned_rate_limits": { 00:08:01.172 "rw_ios_per_sec": 0, 00:08:01.172 "rw_mbytes_per_sec": 0, 00:08:01.172 "r_mbytes_per_sec": 0, 00:08:01.172 "w_mbytes_per_sec": 0 00:08:01.172 }, 00:08:01.172 "claimed": true, 00:08:01.172 "claim_type": "exclusive_write", 00:08:01.172 "zoned": false, 00:08:01.172 "supported_io_types": { 00:08:01.172 "read": true, 00:08:01.172 "write": true, 00:08:01.172 "unmap": true, 00:08:01.172 "flush": true, 00:08:01.172 "reset": true, 00:08:01.172 "nvme_admin": false, 00:08:01.172 "nvme_io": false, 00:08:01.172 "nvme_io_md": false, 00:08:01.172 "write_zeroes": true, 00:08:01.172 "zcopy": true, 00:08:01.172 "get_zone_info": false, 00:08:01.172 "zone_management": false, 00:08:01.172 "zone_append": false, 00:08:01.172 "compare": false, 00:08:01.172 "compare_and_write": false, 00:08:01.172 "abort": true, 00:08:01.172 "seek_hole": false, 00:08:01.172 "seek_data": false, 00:08:01.172 "copy": true, 00:08:01.172 "nvme_iov_md": false 00:08:01.172 }, 00:08:01.172 "memory_domains": [ 00:08:01.172 { 00:08:01.172 "dma_device_id": "system", 00:08:01.172 "dma_device_type": 1 00:08:01.172 }, 00:08:01.172 { 00:08:01.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.172 "dma_device_type": 2 00:08:01.172 } 00:08:01.172 ], 00:08:01.172 "driver_specific": {} 00:08:01.172 } 00:08:01.172 ] 00:08:01.173 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.173 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:01.173 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:01.173 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.173 23:47:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.173 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:01.173 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.173 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.173 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.173 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.173 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.173 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.173 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.173 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.173 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.173 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.175 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.175 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.175 "name": "Existed_Raid", 00:08:01.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.176 "strip_size_kb": 64, 00:08:01.176 "state": "configuring", 00:08:01.176 "raid_level": "concat", 00:08:01.176 "superblock": false, 00:08:01.176 "num_base_bdevs": 3, 00:08:01.176 "num_base_bdevs_discovered": 2, 00:08:01.176 "num_base_bdevs_operational": 3, 00:08:01.176 "base_bdevs_list": [ 00:08:01.176 { 00:08:01.176 "name": "BaseBdev1", 
00:08:01.176 "uuid": "fddbb275-7e36-4994-9f54-4d44538e35cc", 00:08:01.176 "is_configured": true, 00:08:01.176 "data_offset": 0, 00:08:01.176 "data_size": 65536 00:08:01.176 }, 00:08:01.176 { 00:08:01.176 "name": null, 00:08:01.176 "uuid": "c7734000-f7a6-46d4-b18c-f745d3271c92", 00:08:01.176 "is_configured": false, 00:08:01.176 "data_offset": 0, 00:08:01.176 "data_size": 65536 00:08:01.176 }, 00:08:01.176 { 00:08:01.176 "name": "BaseBdev3", 00:08:01.176 "uuid": "eb6f93d3-0574-47c8-af47-4abb223928c8", 00:08:01.176 "is_configured": true, 00:08:01.176 "data_offset": 0, 00:08:01.176 "data_size": 65536 00:08:01.176 } 00:08:01.176 ] 00:08:01.176 }' 00:08:01.176 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.176 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.441 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.441 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:01.441 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.441 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.441 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.701 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:01.701 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:01.701 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.701 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.701 [2024-11-02 23:47:55.552261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:01.701 
23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.701 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:01.701 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.701 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.701 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:01.701 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.701 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.701 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.701 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.701 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.701 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.701 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.701 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.701 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.702 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.702 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.702 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.702 "name": "Existed_Raid", 00:08:01.702 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:01.702 "strip_size_kb": 64, 00:08:01.702 "state": "configuring", 00:08:01.702 "raid_level": "concat", 00:08:01.702 "superblock": false, 00:08:01.702 "num_base_bdevs": 3, 00:08:01.702 "num_base_bdevs_discovered": 1, 00:08:01.702 "num_base_bdevs_operational": 3, 00:08:01.702 "base_bdevs_list": [ 00:08:01.702 { 00:08:01.702 "name": "BaseBdev1", 00:08:01.702 "uuid": "fddbb275-7e36-4994-9f54-4d44538e35cc", 00:08:01.702 "is_configured": true, 00:08:01.702 "data_offset": 0, 00:08:01.702 "data_size": 65536 00:08:01.702 }, 00:08:01.702 { 00:08:01.702 "name": null, 00:08:01.702 "uuid": "c7734000-f7a6-46d4-b18c-f745d3271c92", 00:08:01.702 "is_configured": false, 00:08:01.702 "data_offset": 0, 00:08:01.702 "data_size": 65536 00:08:01.702 }, 00:08:01.702 { 00:08:01.702 "name": null, 00:08:01.702 "uuid": "eb6f93d3-0574-47c8-af47-4abb223928c8", 00:08:01.702 "is_configured": false, 00:08:01.702 "data_offset": 0, 00:08:01.702 "data_size": 65536 00:08:01.702 } 00:08:01.702 ] 00:08:01.702 }' 00:08:01.702 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.702 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.962 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.962 23:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:01.962 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.962 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.962 23:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.962 [2024-11-02 23:47:56.019450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.962 23:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.222 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.222 "name": "Existed_Raid", 00:08:02.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.222 "strip_size_kb": 64, 00:08:02.222 "state": "configuring", 00:08:02.222 "raid_level": "concat", 00:08:02.222 "superblock": false, 00:08:02.222 "num_base_bdevs": 3, 00:08:02.222 "num_base_bdevs_discovered": 2, 00:08:02.222 "num_base_bdevs_operational": 3, 00:08:02.222 "base_bdevs_list": [ 00:08:02.222 { 00:08:02.222 "name": "BaseBdev1", 00:08:02.222 "uuid": "fddbb275-7e36-4994-9f54-4d44538e35cc", 00:08:02.222 "is_configured": true, 00:08:02.222 "data_offset": 0, 00:08:02.222 "data_size": 65536 00:08:02.222 }, 00:08:02.222 { 00:08:02.222 "name": null, 00:08:02.222 "uuid": "c7734000-f7a6-46d4-b18c-f745d3271c92", 00:08:02.222 "is_configured": false, 00:08:02.222 "data_offset": 0, 00:08:02.222 "data_size": 65536 00:08:02.222 }, 00:08:02.222 { 00:08:02.222 "name": "BaseBdev3", 00:08:02.222 "uuid": "eb6f93d3-0574-47c8-af47-4abb223928c8", 00:08:02.222 "is_configured": true, 00:08:02.222 "data_offset": 0, 00:08:02.222 "data_size": 65536 00:08:02.222 } 00:08:02.222 ] 00:08:02.222 }' 00:08:02.222 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.222 23:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.484 [2024-11-02 23:47:56.502687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.484 23:47:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.484 "name": "Existed_Raid", 00:08:02.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.484 "strip_size_kb": 64, 00:08:02.484 "state": "configuring", 00:08:02.484 "raid_level": "concat", 00:08:02.484 "superblock": false, 00:08:02.484 "num_base_bdevs": 3, 00:08:02.484 "num_base_bdevs_discovered": 1, 00:08:02.484 "num_base_bdevs_operational": 3, 00:08:02.484 "base_bdevs_list": [ 00:08:02.484 { 00:08:02.484 "name": null, 00:08:02.484 "uuid": "fddbb275-7e36-4994-9f54-4d44538e35cc", 00:08:02.484 "is_configured": false, 00:08:02.484 "data_offset": 0, 00:08:02.484 "data_size": 65536 00:08:02.484 }, 00:08:02.484 { 00:08:02.484 "name": null, 00:08:02.484 "uuid": "c7734000-f7a6-46d4-b18c-f745d3271c92", 00:08:02.484 "is_configured": false, 00:08:02.484 "data_offset": 0, 00:08:02.484 "data_size": 65536 00:08:02.484 }, 00:08:02.484 { 00:08:02.484 "name": "BaseBdev3", 00:08:02.484 "uuid": "eb6f93d3-0574-47c8-af47-4abb223928c8", 00:08:02.484 "is_configured": true, 00:08:02.484 "data_offset": 0, 00:08:02.484 "data_size": 65536 00:08:02.484 } 00:08:02.484 ] 00:08:02.484 }' 00:08:02.484 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.484 23:47:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.054 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.054 23:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.054 23:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.054 23:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:03.054 23:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.054 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:03.054 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:03.054 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.054 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.055 [2024-11-02 23:47:57.032366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:03.055 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.055 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:03.055 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.055 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.055 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:03.055 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.055 23:47:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.055 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.055 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.055 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.055 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.055 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.055 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.055 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.055 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.055 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.055 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.055 "name": "Existed_Raid", 00:08:03.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.055 "strip_size_kb": 64, 00:08:03.055 "state": "configuring", 00:08:03.055 "raid_level": "concat", 00:08:03.055 "superblock": false, 00:08:03.055 "num_base_bdevs": 3, 00:08:03.055 "num_base_bdevs_discovered": 2, 00:08:03.055 "num_base_bdevs_operational": 3, 00:08:03.055 "base_bdevs_list": [ 00:08:03.055 { 00:08:03.055 "name": null, 00:08:03.055 "uuid": "fddbb275-7e36-4994-9f54-4d44538e35cc", 00:08:03.055 "is_configured": false, 00:08:03.055 "data_offset": 0, 00:08:03.055 "data_size": 65536 00:08:03.055 }, 00:08:03.055 { 00:08:03.055 "name": "BaseBdev2", 00:08:03.055 "uuid": "c7734000-f7a6-46d4-b18c-f745d3271c92", 00:08:03.055 "is_configured": true, 00:08:03.055 "data_offset": 
0, 00:08:03.055 "data_size": 65536 00:08:03.055 }, 00:08:03.055 { 00:08:03.055 "name": "BaseBdev3", 00:08:03.055 "uuid": "eb6f93d3-0574-47c8-af47-4abb223928c8", 00:08:03.055 "is_configured": true, 00:08:03.055 "data_offset": 0, 00:08:03.055 "data_size": 65536 00:08:03.055 } 00:08:03.055 ] 00:08:03.055 }' 00:08:03.055 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.055 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fddbb275-7e36-4994-9f54-4d44538e35cc 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.623 [2024-11-02 23:47:57.626188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:03.623 [2024-11-02 23:47:57.626303] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:03.623 [2024-11-02 23:47:57.626330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:03.623 [2024-11-02 23:47:57.626618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:03.623 [2024-11-02 23:47:57.626791] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:03.623 [2024-11-02 23:47:57.626835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:03.623 [2024-11-02 23:47:57.627062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.623 NewBaseBdev 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:03.623 
23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.623 [ 00:08:03.623 { 00:08:03.623 "name": "NewBaseBdev", 00:08:03.623 "aliases": [ 00:08:03.623 "fddbb275-7e36-4994-9f54-4d44538e35cc" 00:08:03.623 ], 00:08:03.623 "product_name": "Malloc disk", 00:08:03.623 "block_size": 512, 00:08:03.623 "num_blocks": 65536, 00:08:03.623 "uuid": "fddbb275-7e36-4994-9f54-4d44538e35cc", 00:08:03.623 "assigned_rate_limits": { 00:08:03.623 "rw_ios_per_sec": 0, 00:08:03.623 "rw_mbytes_per_sec": 0, 00:08:03.623 "r_mbytes_per_sec": 0, 00:08:03.623 "w_mbytes_per_sec": 0 00:08:03.623 }, 00:08:03.623 "claimed": true, 00:08:03.623 "claim_type": "exclusive_write", 00:08:03.623 "zoned": false, 00:08:03.623 "supported_io_types": { 00:08:03.623 "read": true, 00:08:03.623 "write": true, 00:08:03.623 "unmap": true, 00:08:03.623 "flush": true, 00:08:03.623 "reset": true, 00:08:03.623 "nvme_admin": false, 00:08:03.623 "nvme_io": false, 00:08:03.623 "nvme_io_md": false, 00:08:03.623 "write_zeroes": true, 00:08:03.623 "zcopy": true, 00:08:03.623 "get_zone_info": false, 00:08:03.623 "zone_management": false, 00:08:03.623 "zone_append": false, 00:08:03.623 "compare": false, 00:08:03.623 "compare_and_write": false, 00:08:03.623 "abort": true, 00:08:03.623 "seek_hole": false, 00:08:03.623 "seek_data": false, 00:08:03.623 "copy": true, 00:08:03.623 "nvme_iov_md": false 00:08:03.623 }, 00:08:03.623 
"memory_domains": [ 00:08:03.623 { 00:08:03.623 "dma_device_id": "system", 00:08:03.623 "dma_device_type": 1 00:08:03.623 }, 00:08:03.623 { 00:08:03.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.623 "dma_device_type": 2 00:08:03.623 } 00:08:03.623 ], 00:08:03.623 "driver_specific": {} 00:08:03.623 } 00:08:03.623 ] 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.623 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.882 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.882 "name": "Existed_Raid", 00:08:03.882 "uuid": "391b6ab6-8e44-4914-9303-9f1a10ebb916", 00:08:03.882 "strip_size_kb": 64, 00:08:03.882 "state": "online", 00:08:03.882 "raid_level": "concat", 00:08:03.882 "superblock": false, 00:08:03.882 "num_base_bdevs": 3, 00:08:03.882 "num_base_bdevs_discovered": 3, 00:08:03.882 "num_base_bdevs_operational": 3, 00:08:03.882 "base_bdevs_list": [ 00:08:03.882 { 00:08:03.882 "name": "NewBaseBdev", 00:08:03.882 "uuid": "fddbb275-7e36-4994-9f54-4d44538e35cc", 00:08:03.882 "is_configured": true, 00:08:03.882 "data_offset": 0, 00:08:03.882 "data_size": 65536 00:08:03.882 }, 00:08:03.882 { 00:08:03.882 "name": "BaseBdev2", 00:08:03.882 "uuid": "c7734000-f7a6-46d4-b18c-f745d3271c92", 00:08:03.882 "is_configured": true, 00:08:03.882 "data_offset": 0, 00:08:03.882 "data_size": 65536 00:08:03.882 }, 00:08:03.882 { 00:08:03.882 "name": "BaseBdev3", 00:08:03.882 "uuid": "eb6f93d3-0574-47c8-af47-4abb223928c8", 00:08:03.882 "is_configured": true, 00:08:03.882 "data_offset": 0, 00:08:03.882 "data_size": 65536 00:08:03.882 } 00:08:03.882 ] 00:08:03.882 }' 00:08:03.882 23:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.882 23:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.140 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:04.140 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:04.140 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:04.140 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:04.140 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:04.140 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:04.140 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:04.140 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:04.140 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.140 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.140 [2024-11-02 23:47:58.149630] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.140 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.140 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:04.140 "name": "Existed_Raid", 00:08:04.140 "aliases": [ 00:08:04.141 "391b6ab6-8e44-4914-9303-9f1a10ebb916" 00:08:04.141 ], 00:08:04.141 "product_name": "Raid Volume", 00:08:04.141 "block_size": 512, 00:08:04.141 "num_blocks": 196608, 00:08:04.141 "uuid": "391b6ab6-8e44-4914-9303-9f1a10ebb916", 00:08:04.141 "assigned_rate_limits": { 00:08:04.141 "rw_ios_per_sec": 0, 00:08:04.141 "rw_mbytes_per_sec": 0, 00:08:04.141 "r_mbytes_per_sec": 0, 00:08:04.141 "w_mbytes_per_sec": 0 00:08:04.141 }, 00:08:04.141 "claimed": false, 00:08:04.141 "zoned": false, 00:08:04.141 "supported_io_types": { 00:08:04.141 "read": true, 00:08:04.141 "write": true, 00:08:04.141 "unmap": true, 00:08:04.141 "flush": true, 00:08:04.141 "reset": true, 00:08:04.141 "nvme_admin": false, 00:08:04.141 "nvme_io": false, 00:08:04.141 "nvme_io_md": false, 00:08:04.141 "write_zeroes": true, 
00:08:04.141 "zcopy": false, 00:08:04.141 "get_zone_info": false, 00:08:04.141 "zone_management": false, 00:08:04.141 "zone_append": false, 00:08:04.141 "compare": false, 00:08:04.141 "compare_and_write": false, 00:08:04.141 "abort": false, 00:08:04.141 "seek_hole": false, 00:08:04.141 "seek_data": false, 00:08:04.141 "copy": false, 00:08:04.141 "nvme_iov_md": false 00:08:04.141 }, 00:08:04.141 "memory_domains": [ 00:08:04.141 { 00:08:04.141 "dma_device_id": "system", 00:08:04.141 "dma_device_type": 1 00:08:04.141 }, 00:08:04.141 { 00:08:04.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.141 "dma_device_type": 2 00:08:04.141 }, 00:08:04.141 { 00:08:04.141 "dma_device_id": "system", 00:08:04.141 "dma_device_type": 1 00:08:04.141 }, 00:08:04.141 { 00:08:04.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.141 "dma_device_type": 2 00:08:04.141 }, 00:08:04.141 { 00:08:04.141 "dma_device_id": "system", 00:08:04.141 "dma_device_type": 1 00:08:04.141 }, 00:08:04.141 { 00:08:04.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.141 "dma_device_type": 2 00:08:04.141 } 00:08:04.141 ], 00:08:04.141 "driver_specific": { 00:08:04.141 "raid": { 00:08:04.141 "uuid": "391b6ab6-8e44-4914-9303-9f1a10ebb916", 00:08:04.141 "strip_size_kb": 64, 00:08:04.141 "state": "online", 00:08:04.141 "raid_level": "concat", 00:08:04.141 "superblock": false, 00:08:04.141 "num_base_bdevs": 3, 00:08:04.141 "num_base_bdevs_discovered": 3, 00:08:04.141 "num_base_bdevs_operational": 3, 00:08:04.141 "base_bdevs_list": [ 00:08:04.141 { 00:08:04.141 "name": "NewBaseBdev", 00:08:04.141 "uuid": "fddbb275-7e36-4994-9f54-4d44538e35cc", 00:08:04.141 "is_configured": true, 00:08:04.141 "data_offset": 0, 00:08:04.141 "data_size": 65536 00:08:04.141 }, 00:08:04.141 { 00:08:04.141 "name": "BaseBdev2", 00:08:04.141 "uuid": "c7734000-f7a6-46d4-b18c-f745d3271c92", 00:08:04.141 "is_configured": true, 00:08:04.141 "data_offset": 0, 00:08:04.141 "data_size": 65536 00:08:04.141 }, 00:08:04.141 { 
00:08:04.141 "name": "BaseBdev3", 00:08:04.141 "uuid": "eb6f93d3-0574-47c8-af47-4abb223928c8", 00:08:04.141 "is_configured": true, 00:08:04.141 "data_offset": 0, 00:08:04.141 "data_size": 65536 00:08:04.141 } 00:08:04.141 ] 00:08:04.141 } 00:08:04.141 } 00:08:04.141 }' 00:08:04.141 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:04.141 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:04.141 BaseBdev2 00:08:04.141 BaseBdev3' 00:08:04.141 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:04.401 [2024-11-02 23:47:58.428854] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:04.401 [2024-11-02 23:47:58.428916] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.401 [2024-11-02 23:47:58.429004] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.401 [2024-11-02 23:47:58.429073] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.401 [2024-11-02 23:47:58.429124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76554 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 76554 ']' 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 76554 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76554 00:08:04.401 killing process with pid 76554 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76554' 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@971 -- # kill 76554 00:08:04.401 [2024-11-02 23:47:58.479252] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:04.401 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 76554 00:08:04.661 [2024-11-02 23:47:58.509846] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.661 23:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:04.661 00:08:04.661 real 0m8.842s 00:08:04.661 user 0m15.169s 00:08:04.661 sys 0m1.809s 00:08:04.661 ************************************ 00:08:04.661 END TEST raid_state_function_test 00:08:04.661 ************************************ 00:08:04.661 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:04.661 23:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.921 23:47:58 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:04.921 23:47:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:04.921 23:47:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:04.921 23:47:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.921 ************************************ 00:08:04.921 START TEST raid_state_function_test_sb 00:08:04.921 ************************************ 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77159 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77159' 00:08:04.921 Process raid pid: 77159 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77159 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 77159 ']' 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:04.921 23:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.921 [2024-11-02 23:47:58.879961] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:08:04.921 [2024-11-02 23:47:58.880142] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.181 [2024-11-02 23:47:59.034649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.181 [2024-11-02 23:47:59.059508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.181 [2024-11-02 23:47:59.100994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.181 [2024-11-02 23:47:59.101026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.749 [2024-11-02 23:47:59.713451] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.749 [2024-11-02 23:47:59.713560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.749 [2024-11-02 
23:47:59.713591] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.749 [2024-11-02 23:47:59.713614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.749 [2024-11-02 23:47:59.713631] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:05.749 [2024-11-02 23:47:59.713656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.749 "name": "Existed_Raid", 00:08:05.749 "uuid": "351b6ab3-7e15-40fb-8529-6a8a84467739", 00:08:05.749 "strip_size_kb": 64, 00:08:05.749 "state": "configuring", 00:08:05.749 "raid_level": "concat", 00:08:05.749 "superblock": true, 00:08:05.749 "num_base_bdevs": 3, 00:08:05.749 "num_base_bdevs_discovered": 0, 00:08:05.749 "num_base_bdevs_operational": 3, 00:08:05.749 "base_bdevs_list": [ 00:08:05.749 { 00:08:05.749 "name": "BaseBdev1", 00:08:05.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.749 "is_configured": false, 00:08:05.749 "data_offset": 0, 00:08:05.749 "data_size": 0 00:08:05.749 }, 00:08:05.749 { 00:08:05.749 "name": "BaseBdev2", 00:08:05.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.749 "is_configured": false, 00:08:05.749 "data_offset": 0, 00:08:05.749 "data_size": 0 00:08:05.749 }, 00:08:05.749 { 00:08:05.749 "name": "BaseBdev3", 00:08:05.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.749 "is_configured": false, 00:08:05.749 "data_offset": 0, 00:08:05.749 "data_size": 0 00:08:05.749 } 00:08:05.749 ] 00:08:05.749 }' 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.749 23:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.317 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:06.317 23:48:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.317 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.317 [2024-11-02 23:48:00.172606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:06.317 [2024-11-02 23:48:00.172680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:06.317 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.317 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:06.317 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.317 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.317 [2024-11-02 23:48:00.184606] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:06.317 [2024-11-02 23:48:00.184698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:06.318 [2024-11-02 23:48:00.184710] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.318 [2024-11-02 23:48:00.184719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.318 [2024-11-02 23:48:00.184724] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:06.318 [2024-11-02 23:48:00.184733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:06.318 
23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.318 [2024-11-02 23:48:00.205187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.318 BaseBdev1 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.318 [ 00:08:06.318 { 
00:08:06.318 "name": "BaseBdev1", 00:08:06.318 "aliases": [ 00:08:06.318 "0609022d-96de-41f8-9634-e85be6e2d139" 00:08:06.318 ], 00:08:06.318 "product_name": "Malloc disk", 00:08:06.318 "block_size": 512, 00:08:06.318 "num_blocks": 65536, 00:08:06.318 "uuid": "0609022d-96de-41f8-9634-e85be6e2d139", 00:08:06.318 "assigned_rate_limits": { 00:08:06.318 "rw_ios_per_sec": 0, 00:08:06.318 "rw_mbytes_per_sec": 0, 00:08:06.318 "r_mbytes_per_sec": 0, 00:08:06.318 "w_mbytes_per_sec": 0 00:08:06.318 }, 00:08:06.318 "claimed": true, 00:08:06.318 "claim_type": "exclusive_write", 00:08:06.318 "zoned": false, 00:08:06.318 "supported_io_types": { 00:08:06.318 "read": true, 00:08:06.318 "write": true, 00:08:06.318 "unmap": true, 00:08:06.318 "flush": true, 00:08:06.318 "reset": true, 00:08:06.318 "nvme_admin": false, 00:08:06.318 "nvme_io": false, 00:08:06.318 "nvme_io_md": false, 00:08:06.318 "write_zeroes": true, 00:08:06.318 "zcopy": true, 00:08:06.318 "get_zone_info": false, 00:08:06.318 "zone_management": false, 00:08:06.318 "zone_append": false, 00:08:06.318 "compare": false, 00:08:06.318 "compare_and_write": false, 00:08:06.318 "abort": true, 00:08:06.318 "seek_hole": false, 00:08:06.318 "seek_data": false, 00:08:06.318 "copy": true, 00:08:06.318 "nvme_iov_md": false 00:08:06.318 }, 00:08:06.318 "memory_domains": [ 00:08:06.318 { 00:08:06.318 "dma_device_id": "system", 00:08:06.318 "dma_device_type": 1 00:08:06.318 }, 00:08:06.318 { 00:08:06.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.318 "dma_device_type": 2 00:08:06.318 } 00:08:06.318 ], 00:08:06.318 "driver_specific": {} 00:08:06.318 } 00:08:06.318 ] 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.318 "name": "Existed_Raid", 00:08:06.318 "uuid": "f3c8604d-e859-4904-a64f-27a17f7320a8", 00:08:06.318 "strip_size_kb": 64, 00:08:06.318 "state": "configuring", 00:08:06.318 "raid_level": "concat", 00:08:06.318 "superblock": true, 00:08:06.318 
"num_base_bdevs": 3, 00:08:06.318 "num_base_bdevs_discovered": 1, 00:08:06.318 "num_base_bdevs_operational": 3, 00:08:06.318 "base_bdevs_list": [ 00:08:06.318 { 00:08:06.318 "name": "BaseBdev1", 00:08:06.318 "uuid": "0609022d-96de-41f8-9634-e85be6e2d139", 00:08:06.318 "is_configured": true, 00:08:06.318 "data_offset": 2048, 00:08:06.318 "data_size": 63488 00:08:06.318 }, 00:08:06.318 { 00:08:06.318 "name": "BaseBdev2", 00:08:06.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.318 "is_configured": false, 00:08:06.318 "data_offset": 0, 00:08:06.318 "data_size": 0 00:08:06.318 }, 00:08:06.318 { 00:08:06.318 "name": "BaseBdev3", 00:08:06.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.318 "is_configured": false, 00:08:06.318 "data_offset": 0, 00:08:06.318 "data_size": 0 00:08:06.318 } 00:08:06.318 ] 00:08:06.318 }' 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.318 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.578 [2024-11-02 23:48:00.640470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:06.578 [2024-11-02 23:48:00.640519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:06.578 
23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.578 [2024-11-02 23:48:00.652494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.578 [2024-11-02 23:48:00.654298] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.578 [2024-11-02 23:48:00.654337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.578 [2024-11-02 23:48:00.654346] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:06.578 [2024-11-02 23:48:00.654362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.578 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.836 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.836 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.836 "name": "Existed_Raid", 00:08:06.836 "uuid": "e448a15d-4a35-4be1-a973-502ce2f2d651", 00:08:06.836 "strip_size_kb": 64, 00:08:06.836 "state": "configuring", 00:08:06.836 "raid_level": "concat", 00:08:06.836 "superblock": true, 00:08:06.836 "num_base_bdevs": 3, 00:08:06.836 "num_base_bdevs_discovered": 1, 00:08:06.836 "num_base_bdevs_operational": 3, 00:08:06.836 "base_bdevs_list": [ 00:08:06.836 { 00:08:06.836 "name": "BaseBdev1", 00:08:06.836 "uuid": "0609022d-96de-41f8-9634-e85be6e2d139", 00:08:06.836 "is_configured": true, 00:08:06.836 "data_offset": 2048, 00:08:06.836 "data_size": 63488 00:08:06.836 }, 00:08:06.836 { 00:08:06.836 "name": "BaseBdev2", 00:08:06.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.836 "is_configured": false, 00:08:06.836 "data_offset": 0, 00:08:06.836 "data_size": 0 00:08:06.836 }, 00:08:06.836 { 00:08:06.836 "name": "BaseBdev3", 00:08:06.836 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:06.836 "is_configured": false, 00:08:06.836 "data_offset": 0, 00:08:06.836 "data_size": 0 00:08:06.836 } 00:08:06.836 ] 00:08:06.836 }' 00:08:06.836 23:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.836 23:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.096 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:07.096 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.096 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.096 [2024-11-02 23:48:01.090471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.096 BaseBdev2 00:08:07.096 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.096 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:07.096 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:07.096 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:07.096 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:07.096 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:07.096 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:07.096 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:07.096 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.096 23:48:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:07.096 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.096 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:07.096 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.096 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.096 [ 00:08:07.096 { 00:08:07.096 "name": "BaseBdev2", 00:08:07.096 "aliases": [ 00:08:07.096 "e29f4a44-f937-44e4-874c-b12407212b77" 00:08:07.096 ], 00:08:07.096 "product_name": "Malloc disk", 00:08:07.096 "block_size": 512, 00:08:07.096 "num_blocks": 65536, 00:08:07.096 "uuid": "e29f4a44-f937-44e4-874c-b12407212b77", 00:08:07.096 "assigned_rate_limits": { 00:08:07.096 "rw_ios_per_sec": 0, 00:08:07.096 "rw_mbytes_per_sec": 0, 00:08:07.096 "r_mbytes_per_sec": 0, 00:08:07.096 "w_mbytes_per_sec": 0 00:08:07.096 }, 00:08:07.096 "claimed": true, 00:08:07.096 "claim_type": "exclusive_write", 00:08:07.096 "zoned": false, 00:08:07.096 "supported_io_types": { 00:08:07.096 "read": true, 00:08:07.096 "write": true, 00:08:07.096 "unmap": true, 00:08:07.096 "flush": true, 00:08:07.096 "reset": true, 00:08:07.096 "nvme_admin": false, 00:08:07.096 "nvme_io": false, 00:08:07.096 "nvme_io_md": false, 00:08:07.096 "write_zeroes": true, 00:08:07.096 "zcopy": true, 00:08:07.096 "get_zone_info": false, 00:08:07.096 "zone_management": false, 00:08:07.096 "zone_append": false, 00:08:07.096 "compare": false, 00:08:07.096 "compare_and_write": false, 00:08:07.096 "abort": true, 00:08:07.096 "seek_hole": false, 00:08:07.096 "seek_data": false, 00:08:07.096 "copy": true, 00:08:07.096 "nvme_iov_md": false 00:08:07.097 }, 00:08:07.097 "memory_domains": [ 00:08:07.097 { 00:08:07.097 "dma_device_id": "system", 00:08:07.097 "dma_device_type": 1 00:08:07.097 }, 00:08:07.097 { 00:08:07.097 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.097 "dma_device_type": 2 00:08:07.097 } 00:08:07.097 ], 00:08:07.097 "driver_specific": {} 00:08:07.097 } 00:08:07.097 ] 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.097 "name": "Existed_Raid", 00:08:07.097 "uuid": "e448a15d-4a35-4be1-a973-502ce2f2d651", 00:08:07.097 "strip_size_kb": 64, 00:08:07.097 "state": "configuring", 00:08:07.097 "raid_level": "concat", 00:08:07.097 "superblock": true, 00:08:07.097 "num_base_bdevs": 3, 00:08:07.097 "num_base_bdevs_discovered": 2, 00:08:07.097 "num_base_bdevs_operational": 3, 00:08:07.097 "base_bdevs_list": [ 00:08:07.097 { 00:08:07.097 "name": "BaseBdev1", 00:08:07.097 "uuid": "0609022d-96de-41f8-9634-e85be6e2d139", 00:08:07.097 "is_configured": true, 00:08:07.097 "data_offset": 2048, 00:08:07.097 "data_size": 63488 00:08:07.097 }, 00:08:07.097 { 00:08:07.097 "name": "BaseBdev2", 00:08:07.097 "uuid": "e29f4a44-f937-44e4-874c-b12407212b77", 00:08:07.097 "is_configured": true, 00:08:07.097 "data_offset": 2048, 00:08:07.097 "data_size": 63488 00:08:07.097 }, 00:08:07.097 { 00:08:07.097 "name": "BaseBdev3", 00:08:07.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.097 "is_configured": false, 00:08:07.097 "data_offset": 0, 00:08:07.097 "data_size": 0 00:08:07.097 } 00:08:07.097 ] 00:08:07.097 }' 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.097 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.664 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:07.664 23:48:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.664 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.664 [2024-11-02 23:48:01.596119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:07.664 [2024-11-02 23:48:01.596297] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:07.664 [2024-11-02 23:48:01.596319] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:07.664 [2024-11-02 23:48:01.596621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:07.664 [2024-11-02 23:48:01.596772] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:07.664 [2024-11-02 23:48:01.596783] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:07.664 [2024-11-02 23:48:01.596900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.664 BaseBdev3 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.665 [ 00:08:07.665 { 00:08:07.665 "name": "BaseBdev3", 00:08:07.665 "aliases": [ 00:08:07.665 "ef56bf18-672c-4079-883b-db3e25f689b3" 00:08:07.665 ], 00:08:07.665 "product_name": "Malloc disk", 00:08:07.665 "block_size": 512, 00:08:07.665 "num_blocks": 65536, 00:08:07.665 "uuid": "ef56bf18-672c-4079-883b-db3e25f689b3", 00:08:07.665 "assigned_rate_limits": { 00:08:07.665 "rw_ios_per_sec": 0, 00:08:07.665 "rw_mbytes_per_sec": 0, 00:08:07.665 "r_mbytes_per_sec": 0, 00:08:07.665 "w_mbytes_per_sec": 0 00:08:07.665 }, 00:08:07.665 "claimed": true, 00:08:07.665 "claim_type": "exclusive_write", 00:08:07.665 "zoned": false, 00:08:07.665 "supported_io_types": { 00:08:07.665 "read": true, 00:08:07.665 "write": true, 00:08:07.665 "unmap": true, 00:08:07.665 "flush": true, 00:08:07.665 "reset": true, 00:08:07.665 "nvme_admin": false, 00:08:07.665 "nvme_io": false, 00:08:07.665 "nvme_io_md": false, 00:08:07.665 "write_zeroes": true, 00:08:07.665 "zcopy": true, 00:08:07.665 "get_zone_info": false, 00:08:07.665 "zone_management": false, 00:08:07.665 "zone_append": false, 00:08:07.665 "compare": false, 00:08:07.665 "compare_and_write": false, 00:08:07.665 "abort": true, 00:08:07.665 "seek_hole": false, 00:08:07.665 "seek_data": false, 
00:08:07.665 "copy": true, 00:08:07.665 "nvme_iov_md": false 00:08:07.665 }, 00:08:07.665 "memory_domains": [ 00:08:07.665 { 00:08:07.665 "dma_device_id": "system", 00:08:07.665 "dma_device_type": 1 00:08:07.665 }, 00:08:07.665 { 00:08:07.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.665 "dma_device_type": 2 00:08:07.665 } 00:08:07.665 ], 00:08:07.665 "driver_specific": {} 00:08:07.665 } 00:08:07.665 ] 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.665 "name": "Existed_Raid", 00:08:07.665 "uuid": "e448a15d-4a35-4be1-a973-502ce2f2d651", 00:08:07.665 "strip_size_kb": 64, 00:08:07.665 "state": "online", 00:08:07.665 "raid_level": "concat", 00:08:07.665 "superblock": true, 00:08:07.665 "num_base_bdevs": 3, 00:08:07.665 "num_base_bdevs_discovered": 3, 00:08:07.665 "num_base_bdevs_operational": 3, 00:08:07.665 "base_bdevs_list": [ 00:08:07.665 { 00:08:07.665 "name": "BaseBdev1", 00:08:07.665 "uuid": "0609022d-96de-41f8-9634-e85be6e2d139", 00:08:07.665 "is_configured": true, 00:08:07.665 "data_offset": 2048, 00:08:07.665 "data_size": 63488 00:08:07.665 }, 00:08:07.665 { 00:08:07.665 "name": "BaseBdev2", 00:08:07.665 "uuid": "e29f4a44-f937-44e4-874c-b12407212b77", 00:08:07.665 "is_configured": true, 00:08:07.665 "data_offset": 2048, 00:08:07.665 "data_size": 63488 00:08:07.665 }, 00:08:07.665 { 00:08:07.665 "name": "BaseBdev3", 00:08:07.665 "uuid": "ef56bf18-672c-4079-883b-db3e25f689b3", 00:08:07.665 "is_configured": true, 00:08:07.665 "data_offset": 2048, 00:08:07.665 "data_size": 63488 00:08:07.665 } 00:08:07.665 ] 00:08:07.665 }' 00:08:07.665 23:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.665 23:48:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.234 [2024-11-02 23:48:02.139614] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:08.234 "name": "Existed_Raid", 00:08:08.234 "aliases": [ 00:08:08.234 "e448a15d-4a35-4be1-a973-502ce2f2d651" 00:08:08.234 ], 00:08:08.234 "product_name": "Raid Volume", 00:08:08.234 "block_size": 512, 00:08:08.234 "num_blocks": 190464, 00:08:08.234 "uuid": "e448a15d-4a35-4be1-a973-502ce2f2d651", 00:08:08.234 "assigned_rate_limits": { 00:08:08.234 "rw_ios_per_sec": 0, 00:08:08.234 "rw_mbytes_per_sec": 0, 00:08:08.234 
"r_mbytes_per_sec": 0, 00:08:08.234 "w_mbytes_per_sec": 0 00:08:08.234 }, 00:08:08.234 "claimed": false, 00:08:08.234 "zoned": false, 00:08:08.234 "supported_io_types": { 00:08:08.234 "read": true, 00:08:08.234 "write": true, 00:08:08.234 "unmap": true, 00:08:08.234 "flush": true, 00:08:08.234 "reset": true, 00:08:08.234 "nvme_admin": false, 00:08:08.234 "nvme_io": false, 00:08:08.234 "nvme_io_md": false, 00:08:08.234 "write_zeroes": true, 00:08:08.234 "zcopy": false, 00:08:08.234 "get_zone_info": false, 00:08:08.234 "zone_management": false, 00:08:08.234 "zone_append": false, 00:08:08.234 "compare": false, 00:08:08.234 "compare_and_write": false, 00:08:08.234 "abort": false, 00:08:08.234 "seek_hole": false, 00:08:08.234 "seek_data": false, 00:08:08.234 "copy": false, 00:08:08.234 "nvme_iov_md": false 00:08:08.234 }, 00:08:08.234 "memory_domains": [ 00:08:08.234 { 00:08:08.234 "dma_device_id": "system", 00:08:08.234 "dma_device_type": 1 00:08:08.234 }, 00:08:08.234 { 00:08:08.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.234 "dma_device_type": 2 00:08:08.234 }, 00:08:08.234 { 00:08:08.234 "dma_device_id": "system", 00:08:08.234 "dma_device_type": 1 00:08:08.234 }, 00:08:08.234 { 00:08:08.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.234 "dma_device_type": 2 00:08:08.234 }, 00:08:08.234 { 00:08:08.234 "dma_device_id": "system", 00:08:08.234 "dma_device_type": 1 00:08:08.234 }, 00:08:08.234 { 00:08:08.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.234 "dma_device_type": 2 00:08:08.234 } 00:08:08.234 ], 00:08:08.234 "driver_specific": { 00:08:08.234 "raid": { 00:08:08.234 "uuid": "e448a15d-4a35-4be1-a973-502ce2f2d651", 00:08:08.234 "strip_size_kb": 64, 00:08:08.234 "state": "online", 00:08:08.234 "raid_level": "concat", 00:08:08.234 "superblock": true, 00:08:08.234 "num_base_bdevs": 3, 00:08:08.234 "num_base_bdevs_discovered": 3, 00:08:08.234 "num_base_bdevs_operational": 3, 00:08:08.234 "base_bdevs_list": [ 00:08:08.234 { 00:08:08.234 
"name": "BaseBdev1", 00:08:08.234 "uuid": "0609022d-96de-41f8-9634-e85be6e2d139", 00:08:08.234 "is_configured": true, 00:08:08.234 "data_offset": 2048, 00:08:08.234 "data_size": 63488 00:08:08.234 }, 00:08:08.234 { 00:08:08.234 "name": "BaseBdev2", 00:08:08.234 "uuid": "e29f4a44-f937-44e4-874c-b12407212b77", 00:08:08.234 "is_configured": true, 00:08:08.234 "data_offset": 2048, 00:08:08.234 "data_size": 63488 00:08:08.234 }, 00:08:08.234 { 00:08:08.234 "name": "BaseBdev3", 00:08:08.234 "uuid": "ef56bf18-672c-4079-883b-db3e25f689b3", 00:08:08.234 "is_configured": true, 00:08:08.234 "data_offset": 2048, 00:08:08.234 "data_size": 63488 00:08:08.234 } 00:08:08.234 ] 00:08:08.234 } 00:08:08.234 } 00:08:08.234 }' 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:08.234 BaseBdev2 00:08:08.234 BaseBdev3' 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.234 23:48:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.234 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.494 [2024-11-02 23:48:02.370960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:08.494 [2024-11-02 23:48:02.370991] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.494 [2024-11-02 23:48:02.371049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.494 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.494 "name": "Existed_Raid", 00:08:08.494 "uuid": "e448a15d-4a35-4be1-a973-502ce2f2d651", 00:08:08.494 "strip_size_kb": 64, 00:08:08.494 "state": "offline", 00:08:08.494 "raid_level": "concat", 00:08:08.494 "superblock": true, 00:08:08.494 "num_base_bdevs": 3, 00:08:08.494 "num_base_bdevs_discovered": 2, 00:08:08.494 "num_base_bdevs_operational": 2, 00:08:08.494 "base_bdevs_list": [ 00:08:08.494 { 00:08:08.494 "name": null, 00:08:08.494 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:08.494 "is_configured": false, 00:08:08.494 "data_offset": 0, 00:08:08.494 "data_size": 63488 00:08:08.494 }, 00:08:08.494 { 00:08:08.494 "name": "BaseBdev2", 00:08:08.494 "uuid": "e29f4a44-f937-44e4-874c-b12407212b77", 00:08:08.494 "is_configured": true, 00:08:08.494 "data_offset": 2048, 00:08:08.494 "data_size": 63488 00:08:08.494 }, 00:08:08.494 { 00:08:08.494 "name": "BaseBdev3", 00:08:08.494 "uuid": "ef56bf18-672c-4079-883b-db3e25f689b3", 00:08:08.495 "is_configured": true, 00:08:08.495 "data_offset": 2048, 00:08:08.495 "data_size": 63488 00:08:08.495 } 00:08:08.495 ] 00:08:08.495 }' 00:08:08.495 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.495 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.752 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:08.752 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.752 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:08.752 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.752 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.752 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.752 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.010 [2024-11-02 23:48:02.873421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.010 [2024-11-02 23:48:02.944430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:09.010 [2024-11-02 23:48:02.944479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.010 23:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.011 BaseBdev2 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.011 
23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.011 [ 00:08:09.011 { 00:08:09.011 "name": "BaseBdev2", 00:08:09.011 "aliases": [ 00:08:09.011 "ed1615b5-ccef-4d62-9beb-ef6f83fed265" 00:08:09.011 ], 00:08:09.011 "product_name": "Malloc disk", 00:08:09.011 "block_size": 512, 00:08:09.011 "num_blocks": 65536, 00:08:09.011 "uuid": "ed1615b5-ccef-4d62-9beb-ef6f83fed265", 00:08:09.011 "assigned_rate_limits": { 00:08:09.011 "rw_ios_per_sec": 0, 00:08:09.011 "rw_mbytes_per_sec": 0, 00:08:09.011 "r_mbytes_per_sec": 0, 00:08:09.011 "w_mbytes_per_sec": 0 
00:08:09.011 }, 00:08:09.011 "claimed": false, 00:08:09.011 "zoned": false, 00:08:09.011 "supported_io_types": { 00:08:09.011 "read": true, 00:08:09.011 "write": true, 00:08:09.011 "unmap": true, 00:08:09.011 "flush": true, 00:08:09.011 "reset": true, 00:08:09.011 "nvme_admin": false, 00:08:09.011 "nvme_io": false, 00:08:09.011 "nvme_io_md": false, 00:08:09.011 "write_zeroes": true, 00:08:09.011 "zcopy": true, 00:08:09.011 "get_zone_info": false, 00:08:09.011 "zone_management": false, 00:08:09.011 "zone_append": false, 00:08:09.011 "compare": false, 00:08:09.011 "compare_and_write": false, 00:08:09.011 "abort": true, 00:08:09.011 "seek_hole": false, 00:08:09.011 "seek_data": false, 00:08:09.011 "copy": true, 00:08:09.011 "nvme_iov_md": false 00:08:09.011 }, 00:08:09.011 "memory_domains": [ 00:08:09.011 { 00:08:09.011 "dma_device_id": "system", 00:08:09.011 "dma_device_type": 1 00:08:09.011 }, 00:08:09.011 { 00:08:09.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.011 "dma_device_type": 2 00:08:09.011 } 00:08:09.011 ], 00:08:09.011 "driver_specific": {} 00:08:09.011 } 00:08:09.011 ] 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.011 BaseBdev3 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.011 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.011 [ 00:08:09.011 { 00:08:09.011 "name": "BaseBdev3", 00:08:09.011 "aliases": [ 00:08:09.011 "4de23e7d-92da-40da-9674-b8e5f9cc1137" 00:08:09.011 ], 00:08:09.011 "product_name": "Malloc disk", 00:08:09.011 "block_size": 512, 00:08:09.011 "num_blocks": 65536, 00:08:09.011 "uuid": "4de23e7d-92da-40da-9674-b8e5f9cc1137", 00:08:09.011 "assigned_rate_limits": { 00:08:09.011 "rw_ios_per_sec": 0, 00:08:09.011 "rw_mbytes_per_sec": 0, 
00:08:09.011 "r_mbytes_per_sec": 0, 00:08:09.011 "w_mbytes_per_sec": 0 00:08:09.011 }, 00:08:09.011 "claimed": false, 00:08:09.011 "zoned": false, 00:08:09.011 "supported_io_types": { 00:08:09.011 "read": true, 00:08:09.011 "write": true, 00:08:09.011 "unmap": true, 00:08:09.011 "flush": true, 00:08:09.011 "reset": true, 00:08:09.011 "nvme_admin": false, 00:08:09.011 "nvme_io": false, 00:08:09.011 "nvme_io_md": false, 00:08:09.011 "write_zeroes": true, 00:08:09.011 "zcopy": true, 00:08:09.011 "get_zone_info": false, 00:08:09.011 "zone_management": false, 00:08:09.011 "zone_append": false, 00:08:09.011 "compare": false, 00:08:09.011 "compare_and_write": false, 00:08:09.011 "abort": true, 00:08:09.011 "seek_hole": false, 00:08:09.011 "seek_data": false, 00:08:09.011 "copy": true, 00:08:09.271 "nvme_iov_md": false 00:08:09.271 }, 00:08:09.271 "memory_domains": [ 00:08:09.271 { 00:08:09.271 "dma_device_id": "system", 00:08:09.271 "dma_device_type": 1 00:08:09.271 }, 00:08:09.271 { 00:08:09.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.271 "dma_device_type": 2 00:08:09.271 } 00:08:09.271 ], 00:08:09.271 "driver_specific": {} 00:08:09.271 } 00:08:09.271 ] 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:09.271 [2024-11-02 23:48:03.114816] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.271 [2024-11-02 23:48:03.114856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.271 [2024-11-02 23:48:03.114876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.271 [2024-11-02 23:48:03.116640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.271 23:48:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.271 "name": "Existed_Raid", 00:08:09.271 "uuid": "efe26688-80f0-4194-9951-f186175e6ae5", 00:08:09.271 "strip_size_kb": 64, 00:08:09.271 "state": "configuring", 00:08:09.271 "raid_level": "concat", 00:08:09.271 "superblock": true, 00:08:09.271 "num_base_bdevs": 3, 00:08:09.271 "num_base_bdevs_discovered": 2, 00:08:09.271 "num_base_bdevs_operational": 3, 00:08:09.271 "base_bdevs_list": [ 00:08:09.271 { 00:08:09.271 "name": "BaseBdev1", 00:08:09.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.271 "is_configured": false, 00:08:09.271 "data_offset": 0, 00:08:09.271 "data_size": 0 00:08:09.271 }, 00:08:09.271 { 00:08:09.271 "name": "BaseBdev2", 00:08:09.271 "uuid": "ed1615b5-ccef-4d62-9beb-ef6f83fed265", 00:08:09.271 "is_configured": true, 00:08:09.271 "data_offset": 2048, 00:08:09.271 "data_size": 63488 00:08:09.271 }, 00:08:09.271 { 00:08:09.271 "name": "BaseBdev3", 00:08:09.271 "uuid": "4de23e7d-92da-40da-9674-b8e5f9cc1137", 00:08:09.271 "is_configured": true, 00:08:09.271 "data_offset": 2048, 00:08:09.271 "data_size": 63488 00:08:09.271 } 00:08:09.271 ] 00:08:09.271 }' 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.271 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.531 [2024-11-02 23:48:03.562087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.531 "name": "Existed_Raid", 00:08:09.531 "uuid": "efe26688-80f0-4194-9951-f186175e6ae5", 00:08:09.531 "strip_size_kb": 64, 00:08:09.531 "state": "configuring", 00:08:09.531 "raid_level": "concat", 00:08:09.531 "superblock": true, 00:08:09.531 "num_base_bdevs": 3, 00:08:09.531 "num_base_bdevs_discovered": 1, 00:08:09.531 "num_base_bdevs_operational": 3, 00:08:09.531 "base_bdevs_list": [ 00:08:09.531 { 00:08:09.531 "name": "BaseBdev1", 00:08:09.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.531 "is_configured": false, 00:08:09.531 "data_offset": 0, 00:08:09.531 "data_size": 0 00:08:09.531 }, 00:08:09.531 { 00:08:09.531 "name": null, 00:08:09.531 "uuid": "ed1615b5-ccef-4d62-9beb-ef6f83fed265", 00:08:09.531 "is_configured": false, 00:08:09.531 "data_offset": 0, 00:08:09.531 "data_size": 63488 00:08:09.531 }, 00:08:09.531 { 00:08:09.531 "name": "BaseBdev3", 00:08:09.531 "uuid": "4de23e7d-92da-40da-9674-b8e5f9cc1137", 00:08:09.531 "is_configured": true, 00:08:09.531 "data_offset": 2048, 00:08:09.531 "data_size": 63488 00:08:09.531 } 00:08:09.531 ] 00:08:09.531 }' 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.531 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.100 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:10.100 23:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.100 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:10.100 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.100 23:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.100 [2024-11-02 23:48:04.024233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.100 BaseBdev1 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.100 23:48:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.100 [ 00:08:10.100 { 00:08:10.100 "name": "BaseBdev1", 00:08:10.100 "aliases": [ 00:08:10.100 "d83d0b58-70f9-4449-b744-f4fb155fb9fb" 00:08:10.100 ], 00:08:10.100 "product_name": "Malloc disk", 00:08:10.100 "block_size": 512, 00:08:10.100 "num_blocks": 65536, 00:08:10.100 "uuid": "d83d0b58-70f9-4449-b744-f4fb155fb9fb", 00:08:10.100 "assigned_rate_limits": { 00:08:10.100 "rw_ios_per_sec": 0, 00:08:10.100 "rw_mbytes_per_sec": 0, 00:08:10.100 "r_mbytes_per_sec": 0, 00:08:10.100 "w_mbytes_per_sec": 0 00:08:10.100 }, 00:08:10.100 "claimed": true, 00:08:10.100 "claim_type": "exclusive_write", 00:08:10.100 "zoned": false, 00:08:10.100 "supported_io_types": { 00:08:10.100 "read": true, 00:08:10.100 "write": true, 00:08:10.100 "unmap": true, 00:08:10.100 "flush": true, 00:08:10.100 "reset": true, 00:08:10.100 "nvme_admin": false, 00:08:10.100 "nvme_io": false, 00:08:10.100 "nvme_io_md": false, 00:08:10.100 "write_zeroes": true, 00:08:10.100 "zcopy": true, 00:08:10.100 "get_zone_info": false, 00:08:10.100 "zone_management": false, 00:08:10.100 "zone_append": false, 00:08:10.100 "compare": false, 00:08:10.100 "compare_and_write": false, 00:08:10.100 "abort": true, 00:08:10.100 "seek_hole": false, 00:08:10.100 "seek_data": false, 00:08:10.100 "copy": true, 00:08:10.100 "nvme_iov_md": false 00:08:10.100 }, 00:08:10.100 "memory_domains": [ 00:08:10.100 { 00:08:10.100 "dma_device_id": "system", 00:08:10.100 "dma_device_type": 1 00:08:10.100 }, 00:08:10.100 { 00:08:10.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.100 
"dma_device_type": 2 00:08:10.100 } 00:08:10.100 ], 00:08:10.100 "driver_specific": {} 00:08:10.100 } 00:08:10.100 ] 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.100 "name": "Existed_Raid", 00:08:10.100 "uuid": "efe26688-80f0-4194-9951-f186175e6ae5", 00:08:10.100 "strip_size_kb": 64, 00:08:10.100 "state": "configuring", 00:08:10.100 "raid_level": "concat", 00:08:10.100 "superblock": true, 00:08:10.100 "num_base_bdevs": 3, 00:08:10.100 "num_base_bdevs_discovered": 2, 00:08:10.100 "num_base_bdevs_operational": 3, 00:08:10.100 "base_bdevs_list": [ 00:08:10.100 { 00:08:10.100 "name": "BaseBdev1", 00:08:10.100 "uuid": "d83d0b58-70f9-4449-b744-f4fb155fb9fb", 00:08:10.100 "is_configured": true, 00:08:10.100 "data_offset": 2048, 00:08:10.100 "data_size": 63488 00:08:10.100 }, 00:08:10.100 { 00:08:10.100 "name": null, 00:08:10.100 "uuid": "ed1615b5-ccef-4d62-9beb-ef6f83fed265", 00:08:10.100 "is_configured": false, 00:08:10.100 "data_offset": 0, 00:08:10.100 "data_size": 63488 00:08:10.100 }, 00:08:10.100 { 00:08:10.100 "name": "BaseBdev3", 00:08:10.100 "uuid": "4de23e7d-92da-40da-9674-b8e5f9cc1137", 00:08:10.100 "is_configured": true, 00:08:10.100 "data_offset": 2048, 00:08:10.100 "data_size": 63488 00:08:10.100 } 00:08:10.100 ] 00:08:10.100 }' 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.100 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.668 [2024-11-02 23:48:04.555378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.668 
23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.668 "name": "Existed_Raid", 00:08:10.668 "uuid": "efe26688-80f0-4194-9951-f186175e6ae5", 00:08:10.668 "strip_size_kb": 64, 00:08:10.668 "state": "configuring", 00:08:10.668 "raid_level": "concat", 00:08:10.668 "superblock": true, 00:08:10.668 "num_base_bdevs": 3, 00:08:10.668 "num_base_bdevs_discovered": 1, 00:08:10.668 "num_base_bdevs_operational": 3, 00:08:10.668 "base_bdevs_list": [ 00:08:10.668 { 00:08:10.668 "name": "BaseBdev1", 00:08:10.668 "uuid": "d83d0b58-70f9-4449-b744-f4fb155fb9fb", 00:08:10.668 "is_configured": true, 00:08:10.668 "data_offset": 2048, 00:08:10.668 "data_size": 63488 00:08:10.668 }, 00:08:10.668 { 00:08:10.668 "name": null, 00:08:10.668 "uuid": "ed1615b5-ccef-4d62-9beb-ef6f83fed265", 00:08:10.668 "is_configured": false, 00:08:10.668 "data_offset": 0, 00:08:10.668 "data_size": 63488 00:08:10.668 }, 00:08:10.668 { 00:08:10.668 "name": null, 00:08:10.668 "uuid": "4de23e7d-92da-40da-9674-b8e5f9cc1137", 00:08:10.668 "is_configured": false, 00:08:10.668 "data_offset": 0, 00:08:10.668 "data_size": 63488 00:08:10.668 } 00:08:10.668 ] 00:08:10.668 }' 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.668 23:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.927 
23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.927 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.927 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.927 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.185 [2024-11-02 23:48:05.070527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.185 "name": "Existed_Raid", 00:08:11.185 "uuid": "efe26688-80f0-4194-9951-f186175e6ae5", 00:08:11.185 "strip_size_kb": 64, 00:08:11.185 "state": "configuring", 00:08:11.185 "raid_level": "concat", 00:08:11.185 "superblock": true, 00:08:11.185 "num_base_bdevs": 3, 00:08:11.185 "num_base_bdevs_discovered": 2, 00:08:11.185 "num_base_bdevs_operational": 3, 00:08:11.185 "base_bdevs_list": [ 00:08:11.185 { 00:08:11.185 "name": "BaseBdev1", 00:08:11.185 "uuid": "d83d0b58-70f9-4449-b744-f4fb155fb9fb", 00:08:11.185 "is_configured": true, 00:08:11.185 "data_offset": 2048, 00:08:11.185 "data_size": 63488 00:08:11.185 }, 00:08:11.185 { 00:08:11.185 "name": null, 00:08:11.185 "uuid": "ed1615b5-ccef-4d62-9beb-ef6f83fed265", 00:08:11.185 "is_configured": false, 00:08:11.185 "data_offset": 0, 00:08:11.185 "data_size": 
63488 00:08:11.185 }, 00:08:11.185 { 00:08:11.185 "name": "BaseBdev3", 00:08:11.185 "uuid": "4de23e7d-92da-40da-9674-b8e5f9cc1137", 00:08:11.185 "is_configured": true, 00:08:11.185 "data_offset": 2048, 00:08:11.185 "data_size": 63488 00:08:11.185 } 00:08:11.185 ] 00:08:11.185 }' 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.185 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.444 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.444 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:11.444 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.444 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.444 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.444 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:11.444 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:11.444 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.444 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.703 [2024-11-02 23:48:05.537723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:11.703 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.703 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:11.703 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:11.703 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.703 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.703 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.703 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.703 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.703 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.703 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.703 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.703 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.703 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.703 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.703 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.703 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.703 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.703 "name": "Existed_Raid", 00:08:11.703 "uuid": "efe26688-80f0-4194-9951-f186175e6ae5", 00:08:11.703 "strip_size_kb": 64, 00:08:11.703 "state": "configuring", 00:08:11.703 "raid_level": "concat", 00:08:11.703 "superblock": true, 00:08:11.703 "num_base_bdevs": 3, 00:08:11.703 "num_base_bdevs_discovered": 1, 00:08:11.703 "num_base_bdevs_operational": 
3, 00:08:11.703 "base_bdevs_list": [ 00:08:11.703 { 00:08:11.703 "name": null, 00:08:11.703 "uuid": "d83d0b58-70f9-4449-b744-f4fb155fb9fb", 00:08:11.703 "is_configured": false, 00:08:11.703 "data_offset": 0, 00:08:11.703 "data_size": 63488 00:08:11.703 }, 00:08:11.703 { 00:08:11.703 "name": null, 00:08:11.703 "uuid": "ed1615b5-ccef-4d62-9beb-ef6f83fed265", 00:08:11.703 "is_configured": false, 00:08:11.703 "data_offset": 0, 00:08:11.703 "data_size": 63488 00:08:11.703 }, 00:08:11.703 { 00:08:11.703 "name": "BaseBdev3", 00:08:11.703 "uuid": "4de23e7d-92da-40da-9674-b8e5f9cc1137", 00:08:11.703 "is_configured": true, 00:08:11.703 "data_offset": 2048, 00:08:11.703 "data_size": 63488 00:08:11.703 } 00:08:11.703 ] 00:08:11.703 }' 00:08:11.703 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.703 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.962 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.962 23:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:11.962 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.962 23:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.962 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.962 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:11.962 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:11.962 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.962 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:11.962 [2024-11-02 23:48:06.051323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:12.222 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.222 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:12.222 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.222 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.222 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:12.222 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.222 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.222 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.222 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.222 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.222 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.222 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.222 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.222 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.222 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.222 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:12.222 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.222 "name": "Existed_Raid", 00:08:12.222 "uuid": "efe26688-80f0-4194-9951-f186175e6ae5", 00:08:12.222 "strip_size_kb": 64, 00:08:12.222 "state": "configuring", 00:08:12.222 "raid_level": "concat", 00:08:12.222 "superblock": true, 00:08:12.222 "num_base_bdevs": 3, 00:08:12.222 "num_base_bdevs_discovered": 2, 00:08:12.222 "num_base_bdevs_operational": 3, 00:08:12.222 "base_bdevs_list": [ 00:08:12.222 { 00:08:12.222 "name": null, 00:08:12.222 "uuid": "d83d0b58-70f9-4449-b744-f4fb155fb9fb", 00:08:12.222 "is_configured": false, 00:08:12.222 "data_offset": 0, 00:08:12.222 "data_size": 63488 00:08:12.222 }, 00:08:12.222 { 00:08:12.222 "name": "BaseBdev2", 00:08:12.222 "uuid": "ed1615b5-ccef-4d62-9beb-ef6f83fed265", 00:08:12.222 "is_configured": true, 00:08:12.222 "data_offset": 2048, 00:08:12.222 "data_size": 63488 00:08:12.222 }, 00:08:12.222 { 00:08:12.222 "name": "BaseBdev3", 00:08:12.222 "uuid": "4de23e7d-92da-40da-9674-b8e5f9cc1137", 00:08:12.222 "is_configured": true, 00:08:12.222 "data_offset": 2048, 00:08:12.222 "data_size": 63488 00:08:12.222 } 00:08:12.222 ] 00:08:12.222 }' 00:08:12.222 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.222 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.482 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.482 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.482 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.482 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:12.482 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:12.482 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:12.482 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.482 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:12.482 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.482 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.754 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.754 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d83d0b58-70f9-4449-b744-f4fb155fb9fb 00:08:12.754 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.754 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.754 [2024-11-02 23:48:06.621131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:12.754 [2024-11-02 23:48:06.621359] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:12.754 [2024-11-02 23:48:06.621426] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:12.754 [2024-11-02 23:48:06.621692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:12.754 NewBaseBdev 00:08:12.754 [2024-11-02 23:48:06.621855] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:12.754 [2024-11-02 23:48:06.621867] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:12.754 [2024-11-02 23:48:06.621987] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.754 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.754 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:12.754 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:08:12.754 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:12.754 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:12.754 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:12.754 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:12.755 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:12.755 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.755 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.755 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.755 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:12.755 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.755 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.755 [ 00:08:12.755 { 00:08:12.755 "name": "NewBaseBdev", 00:08:12.755 "aliases": [ 00:08:12.755 "d83d0b58-70f9-4449-b744-f4fb155fb9fb" 00:08:12.755 ], 00:08:12.755 "product_name": "Malloc disk", 00:08:12.755 "block_size": 512, 00:08:12.755 "num_blocks": 65536, 00:08:12.755 "uuid": 
"d83d0b58-70f9-4449-b744-f4fb155fb9fb", 00:08:12.755 "assigned_rate_limits": { 00:08:12.755 "rw_ios_per_sec": 0, 00:08:12.755 "rw_mbytes_per_sec": 0, 00:08:12.755 "r_mbytes_per_sec": 0, 00:08:12.755 "w_mbytes_per_sec": 0 00:08:12.755 }, 00:08:12.755 "claimed": true, 00:08:12.755 "claim_type": "exclusive_write", 00:08:12.755 "zoned": false, 00:08:12.755 "supported_io_types": { 00:08:12.755 "read": true, 00:08:12.755 "write": true, 00:08:12.755 "unmap": true, 00:08:12.755 "flush": true, 00:08:12.755 "reset": true, 00:08:12.755 "nvme_admin": false, 00:08:12.755 "nvme_io": false, 00:08:12.755 "nvme_io_md": false, 00:08:12.755 "write_zeroes": true, 00:08:12.755 "zcopy": true, 00:08:12.755 "get_zone_info": false, 00:08:12.755 "zone_management": false, 00:08:12.755 "zone_append": false, 00:08:12.755 "compare": false, 00:08:12.755 "compare_and_write": false, 00:08:12.755 "abort": true, 00:08:12.755 "seek_hole": false, 00:08:12.755 "seek_data": false, 00:08:12.755 "copy": true, 00:08:12.755 "nvme_iov_md": false 00:08:12.755 }, 00:08:12.755 "memory_domains": [ 00:08:12.755 { 00:08:12.755 "dma_device_id": "system", 00:08:12.755 "dma_device_type": 1 00:08:12.755 }, 00:08:12.755 { 00:08:12.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.755 "dma_device_type": 2 00:08:12.755 } 00:08:12.755 ], 00:08:12.755 "driver_specific": {} 00:08:12.755 } 00:08:12.755 ] 00:08:12.755 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.755 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:12.755 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:12.755 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.755 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.755 23:48:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:12.755 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.755 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.755 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.756 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.756 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.756 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.756 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.756 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.756 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.756 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.756 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.756 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.756 "name": "Existed_Raid", 00:08:12.756 "uuid": "efe26688-80f0-4194-9951-f186175e6ae5", 00:08:12.756 "strip_size_kb": 64, 00:08:12.756 "state": "online", 00:08:12.756 "raid_level": "concat", 00:08:12.756 "superblock": true, 00:08:12.756 "num_base_bdevs": 3, 00:08:12.756 "num_base_bdevs_discovered": 3, 00:08:12.756 "num_base_bdevs_operational": 3, 00:08:12.756 "base_bdevs_list": [ 00:08:12.756 { 00:08:12.756 "name": "NewBaseBdev", 00:08:12.756 "uuid": "d83d0b58-70f9-4449-b744-f4fb155fb9fb", 00:08:12.756 "is_configured": 
true, 00:08:12.756 "data_offset": 2048, 00:08:12.756 "data_size": 63488 00:08:12.756 }, 00:08:12.756 { 00:08:12.756 "name": "BaseBdev2", 00:08:12.756 "uuid": "ed1615b5-ccef-4d62-9beb-ef6f83fed265", 00:08:12.756 "is_configured": true, 00:08:12.756 "data_offset": 2048, 00:08:12.756 "data_size": 63488 00:08:12.756 }, 00:08:12.756 { 00:08:12.756 "name": "BaseBdev3", 00:08:12.756 "uuid": "4de23e7d-92da-40da-9674-b8e5f9cc1137", 00:08:12.756 "is_configured": true, 00:08:12.756 "data_offset": 2048, 00:08:12.756 "data_size": 63488 00:08:12.756 } 00:08:12.756 ] 00:08:12.756 }' 00:08:12.756 23:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.756 23:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.015 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:13.015 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:13.015 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:13.015 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:13.015 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.015 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:13.015 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:13.015 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:13.015 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.015 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.015 [2024-11-02 23:48:07.100672] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:13.275 "name": "Existed_Raid", 00:08:13.275 "aliases": [ 00:08:13.275 "efe26688-80f0-4194-9951-f186175e6ae5" 00:08:13.275 ], 00:08:13.275 "product_name": "Raid Volume", 00:08:13.275 "block_size": 512, 00:08:13.275 "num_blocks": 190464, 00:08:13.275 "uuid": "efe26688-80f0-4194-9951-f186175e6ae5", 00:08:13.275 "assigned_rate_limits": { 00:08:13.275 "rw_ios_per_sec": 0, 00:08:13.275 "rw_mbytes_per_sec": 0, 00:08:13.275 "r_mbytes_per_sec": 0, 00:08:13.275 "w_mbytes_per_sec": 0 00:08:13.275 }, 00:08:13.275 "claimed": false, 00:08:13.275 "zoned": false, 00:08:13.275 "supported_io_types": { 00:08:13.275 "read": true, 00:08:13.275 "write": true, 00:08:13.275 "unmap": true, 00:08:13.275 "flush": true, 00:08:13.275 "reset": true, 00:08:13.275 "nvme_admin": false, 00:08:13.275 "nvme_io": false, 00:08:13.275 "nvme_io_md": false, 00:08:13.275 "write_zeroes": true, 00:08:13.275 "zcopy": false, 00:08:13.275 "get_zone_info": false, 00:08:13.275 "zone_management": false, 00:08:13.275 "zone_append": false, 00:08:13.275 "compare": false, 00:08:13.275 "compare_and_write": false, 00:08:13.275 "abort": false, 00:08:13.275 "seek_hole": false, 00:08:13.275 "seek_data": false, 00:08:13.275 "copy": false, 00:08:13.275 "nvme_iov_md": false 00:08:13.275 }, 00:08:13.275 "memory_domains": [ 00:08:13.275 { 00:08:13.275 "dma_device_id": "system", 00:08:13.275 "dma_device_type": 1 00:08:13.275 }, 00:08:13.275 { 00:08:13.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.275 "dma_device_type": 2 00:08:13.275 }, 00:08:13.275 { 00:08:13.275 "dma_device_id": "system", 00:08:13.275 "dma_device_type": 1 00:08:13.275 }, 00:08:13.275 { 00:08:13.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.275 
"dma_device_type": 2 00:08:13.275 }, 00:08:13.275 { 00:08:13.275 "dma_device_id": "system", 00:08:13.275 "dma_device_type": 1 00:08:13.275 }, 00:08:13.275 { 00:08:13.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.275 "dma_device_type": 2 00:08:13.275 } 00:08:13.275 ], 00:08:13.275 "driver_specific": { 00:08:13.275 "raid": { 00:08:13.275 "uuid": "efe26688-80f0-4194-9951-f186175e6ae5", 00:08:13.275 "strip_size_kb": 64, 00:08:13.275 "state": "online", 00:08:13.275 "raid_level": "concat", 00:08:13.275 "superblock": true, 00:08:13.275 "num_base_bdevs": 3, 00:08:13.275 "num_base_bdevs_discovered": 3, 00:08:13.275 "num_base_bdevs_operational": 3, 00:08:13.275 "base_bdevs_list": [ 00:08:13.275 { 00:08:13.275 "name": "NewBaseBdev", 00:08:13.275 "uuid": "d83d0b58-70f9-4449-b744-f4fb155fb9fb", 00:08:13.275 "is_configured": true, 00:08:13.275 "data_offset": 2048, 00:08:13.275 "data_size": 63488 00:08:13.275 }, 00:08:13.275 { 00:08:13.275 "name": "BaseBdev2", 00:08:13.275 "uuid": "ed1615b5-ccef-4d62-9beb-ef6f83fed265", 00:08:13.275 "is_configured": true, 00:08:13.275 "data_offset": 2048, 00:08:13.275 "data_size": 63488 00:08:13.275 }, 00:08:13.275 { 00:08:13.275 "name": "BaseBdev3", 00:08:13.275 "uuid": "4de23e7d-92da-40da-9674-b8e5f9cc1137", 00:08:13.275 "is_configured": true, 00:08:13.275 "data_offset": 2048, 00:08:13.275 "data_size": 63488 00:08:13.275 } 00:08:13.275 ] 00:08:13.275 } 00:08:13.275 } 00:08:13.275 }' 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:13.275 BaseBdev2 00:08:13.275 BaseBdev3' 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.275 
23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.275 [2024-11-02 23:48:07.355915] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:13.275 [2024-11-02 23:48:07.355978] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.275 [2024-11-02 23:48:07.356070] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.275 [2024-11-02 23:48:07.356156] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.275 [2024-11-02 23:48:07.356211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:13.275 23:48:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77159 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 77159 ']' 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 77159 00:08:13.275 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:08:13.534 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:13.534 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77159 00:08:13.534 killing process with pid 77159 00:08:13.534 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:13.534 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:13.534 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77159' 00:08:13.534 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 77159 00:08:13.534 [2024-11-02 23:48:07.402272] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:13.534 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 77159 00:08:13.534 [2024-11-02 23:48:07.433307] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:13.796 ************************************ 00:08:13.796 END TEST raid_state_function_test_sb 00:08:13.796 ************************************ 00:08:13.796 23:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:13.796 00:08:13.796 real 0m8.854s 00:08:13.796 user 0m15.198s 00:08:13.796 sys 0m1.783s 00:08:13.796 23:48:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:13.796 23:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.796 23:48:07 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:13.796 23:48:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:13.796 23:48:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:13.796 23:48:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:13.796 ************************************ 00:08:13.796 START TEST raid_superblock_test 00:08:13.796 ************************************ 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=77768 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 77768 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 77768 ']' 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:13.796 23:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.796 [2024-11-02 23:48:07.799131] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:08:13.796 [2024-11-02 23:48:07.799354] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77768 ] 00:08:14.058 [2024-11-02 23:48:07.933273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.058 [2024-11-02 23:48:07.960256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.058 [2024-11-02 23:48:08.002438] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.058 [2024-11-02 23:48:08.002560] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:14.626 
23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.626 malloc1 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.626 [2024-11-02 23:48:08.664415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:14.626 [2024-11-02 23:48:08.664513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.626 [2024-11-02 23:48:08.664549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:14.626 [2024-11-02 23:48:08.664581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.626 [2024-11-02 23:48:08.666632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.626 [2024-11-02 23:48:08.666714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:14.626 pt1 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.626 malloc2 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.626 [2024-11-02 23:48:08.696816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:14.626 [2024-11-02 23:48:08.696904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.626 [2024-11-02 23:48:08.696936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:14.626 [2024-11-02 23:48:08.696965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.626 [2024-11-02 23:48:08.699087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.626 [2024-11-02 23:48:08.699174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:14.626 
pt2 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.626 malloc3 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:14.626 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.886 [2024-11-02 23:48:08.725254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:14.886 [2024-11-02 23:48:08.725370] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.886 [2024-11-02 23:48:08.725409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:14.886 [2024-11-02 23:48:08.725445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.886 [2024-11-02 23:48:08.727533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.886 [2024-11-02 23:48:08.727606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:14.886 pt3 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.886 [2024-11-02 23:48:08.737301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:14.886 [2024-11-02 23:48:08.739184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:14.886 [2024-11-02 23:48:08.739277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:14.886 [2024-11-02 23:48:08.739461] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:14.886 [2024-11-02 23:48:08.739507] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:14.886 [2024-11-02 23:48:08.739796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 
00:08:14.886 [2024-11-02 23:48:08.739953] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:14.886 [2024-11-02 23:48:08.739992] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:14.886 [2024-11-02 23:48:08.740138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.886 23:48:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.886 "name": "raid_bdev1", 00:08:14.886 "uuid": "cff10e35-b5ac-48a4-9b51-446312ded26a", 00:08:14.886 "strip_size_kb": 64, 00:08:14.886 "state": "online", 00:08:14.886 "raid_level": "concat", 00:08:14.886 "superblock": true, 00:08:14.886 "num_base_bdevs": 3, 00:08:14.886 "num_base_bdevs_discovered": 3, 00:08:14.886 "num_base_bdevs_operational": 3, 00:08:14.886 "base_bdevs_list": [ 00:08:14.886 { 00:08:14.886 "name": "pt1", 00:08:14.886 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.886 "is_configured": true, 00:08:14.886 "data_offset": 2048, 00:08:14.886 "data_size": 63488 00:08:14.886 }, 00:08:14.886 { 00:08:14.886 "name": "pt2", 00:08:14.886 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.886 "is_configured": true, 00:08:14.886 "data_offset": 2048, 00:08:14.886 "data_size": 63488 00:08:14.886 }, 00:08:14.886 { 00:08:14.886 "name": "pt3", 00:08:14.886 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:14.886 "is_configured": true, 00:08:14.886 "data_offset": 2048, 00:08:14.886 "data_size": 63488 00:08:14.886 } 00:08:14.886 ] 00:08:14.886 }' 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.886 23:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.146 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:15.146 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:15.146 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:15.146 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:15.146 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:15.146 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:15.146 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:15.146 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.147 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.147 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:15.147 [2024-11-02 23:48:09.164918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.147 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.147 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:15.147 "name": "raid_bdev1", 00:08:15.147 "aliases": [ 00:08:15.147 "cff10e35-b5ac-48a4-9b51-446312ded26a" 00:08:15.147 ], 00:08:15.147 "product_name": "Raid Volume", 00:08:15.147 "block_size": 512, 00:08:15.147 "num_blocks": 190464, 00:08:15.147 "uuid": "cff10e35-b5ac-48a4-9b51-446312ded26a", 00:08:15.147 "assigned_rate_limits": { 00:08:15.147 "rw_ios_per_sec": 0, 00:08:15.147 "rw_mbytes_per_sec": 0, 00:08:15.147 "r_mbytes_per_sec": 0, 00:08:15.147 "w_mbytes_per_sec": 0 00:08:15.147 }, 00:08:15.147 "claimed": false, 00:08:15.147 "zoned": false, 00:08:15.147 "supported_io_types": { 00:08:15.147 "read": true, 00:08:15.147 "write": true, 00:08:15.147 "unmap": true, 00:08:15.147 "flush": true, 00:08:15.147 "reset": true, 00:08:15.147 "nvme_admin": false, 00:08:15.147 "nvme_io": false, 00:08:15.147 "nvme_io_md": false, 00:08:15.147 "write_zeroes": true, 00:08:15.147 "zcopy": false, 00:08:15.147 "get_zone_info": false, 00:08:15.147 "zone_management": false, 00:08:15.147 "zone_append": false, 00:08:15.147 "compare": 
false, 00:08:15.147 "compare_and_write": false, 00:08:15.147 "abort": false, 00:08:15.147 "seek_hole": false, 00:08:15.147 "seek_data": false, 00:08:15.147 "copy": false, 00:08:15.147 "nvme_iov_md": false 00:08:15.147 }, 00:08:15.147 "memory_domains": [ 00:08:15.147 { 00:08:15.147 "dma_device_id": "system", 00:08:15.147 "dma_device_type": 1 00:08:15.147 }, 00:08:15.147 { 00:08:15.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.147 "dma_device_type": 2 00:08:15.147 }, 00:08:15.147 { 00:08:15.147 "dma_device_id": "system", 00:08:15.147 "dma_device_type": 1 00:08:15.147 }, 00:08:15.147 { 00:08:15.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.147 "dma_device_type": 2 00:08:15.147 }, 00:08:15.147 { 00:08:15.147 "dma_device_id": "system", 00:08:15.147 "dma_device_type": 1 00:08:15.147 }, 00:08:15.147 { 00:08:15.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.147 "dma_device_type": 2 00:08:15.147 } 00:08:15.147 ], 00:08:15.147 "driver_specific": { 00:08:15.147 "raid": { 00:08:15.147 "uuid": "cff10e35-b5ac-48a4-9b51-446312ded26a", 00:08:15.147 "strip_size_kb": 64, 00:08:15.147 "state": "online", 00:08:15.147 "raid_level": "concat", 00:08:15.147 "superblock": true, 00:08:15.147 "num_base_bdevs": 3, 00:08:15.147 "num_base_bdevs_discovered": 3, 00:08:15.147 "num_base_bdevs_operational": 3, 00:08:15.147 "base_bdevs_list": [ 00:08:15.147 { 00:08:15.147 "name": "pt1", 00:08:15.147 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.147 "is_configured": true, 00:08:15.147 "data_offset": 2048, 00:08:15.147 "data_size": 63488 00:08:15.147 }, 00:08:15.147 { 00:08:15.147 "name": "pt2", 00:08:15.147 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.147 "is_configured": true, 00:08:15.147 "data_offset": 2048, 00:08:15.147 "data_size": 63488 00:08:15.147 }, 00:08:15.147 { 00:08:15.147 "name": "pt3", 00:08:15.147 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:15.147 "is_configured": true, 00:08:15.147 "data_offset": 2048, 00:08:15.147 
"data_size": 63488 00:08:15.147 } 00:08:15.147 ] 00:08:15.147 } 00:08:15.147 } 00:08:15.147 }' 00:08:15.147 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:15.406 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:15.406 pt2 00:08:15.406 pt3' 00:08:15.406 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.407 23:48:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.407 [2024-11-02 23:48:09.448336] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.407 23:48:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cff10e35-b5ac-48a4-9b51-446312ded26a 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cff10e35-b5ac-48a4-9b51-446312ded26a ']' 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.407 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.407 [2024-11-02 23:48:09.495981] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.407 [2024-11-02 23:48:09.496041] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:15.407 [2024-11-02 23:48:09.496167] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.407 [2024-11-02 23:48:09.496256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.407 [2024-11-02 23:48:09.496310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.667 23:48:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.667 [2024-11-02 23:48:09.647749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:15.667 [2024-11-02 23:48:09.649669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:08:15.667 [2024-11-02 23:48:09.649762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:15.667 [2024-11-02 23:48:09.649818] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:15.667 [2024-11-02 23:48:09.649858] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:15.667 [2024-11-02 23:48:09.649890] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:15.667 [2024-11-02 23:48:09.649919] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.667 [2024-11-02 23:48:09.649932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:15.667 request: 00:08:15.667 { 00:08:15.667 "name": "raid_bdev1", 00:08:15.667 "raid_level": "concat", 00:08:15.667 "base_bdevs": [ 00:08:15.667 "malloc1", 00:08:15.667 "malloc2", 00:08:15.667 "malloc3" 00:08:15.667 ], 00:08:15.667 "strip_size_kb": 64, 00:08:15.667 "superblock": false, 00:08:15.667 "method": "bdev_raid_create", 00:08:15.667 "req_id": 1 00:08:15.667 } 00:08:15.667 Got JSON-RPC error response 00:08:15.667 response: 00:08:15.667 { 00:08:15.667 "code": -17, 00:08:15.667 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:15.667 } 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.667 [2024-11-02 23:48:09.715600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:15.667 [2024-11-02 23:48:09.715689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.667 [2024-11-02 23:48:09.715722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:15.667 [2024-11-02 23:48:09.715792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.667 [2024-11-02 23:48:09.717940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.667 [2024-11-02 23:48:09.718009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:15.667 [2024-11-02 23:48:09.718097] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:15.667 [2024-11-02 23:48:09.718176] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:15.667 pt1 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.667 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.925 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.925 "name": "raid_bdev1", 
00:08:15.925 "uuid": "cff10e35-b5ac-48a4-9b51-446312ded26a", 00:08:15.925 "strip_size_kb": 64, 00:08:15.925 "state": "configuring", 00:08:15.925 "raid_level": "concat", 00:08:15.925 "superblock": true, 00:08:15.925 "num_base_bdevs": 3, 00:08:15.925 "num_base_bdevs_discovered": 1, 00:08:15.925 "num_base_bdevs_operational": 3, 00:08:15.925 "base_bdevs_list": [ 00:08:15.925 { 00:08:15.925 "name": "pt1", 00:08:15.925 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.925 "is_configured": true, 00:08:15.925 "data_offset": 2048, 00:08:15.925 "data_size": 63488 00:08:15.925 }, 00:08:15.925 { 00:08:15.925 "name": null, 00:08:15.925 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.926 "is_configured": false, 00:08:15.926 "data_offset": 2048, 00:08:15.926 "data_size": 63488 00:08:15.926 }, 00:08:15.926 { 00:08:15.926 "name": null, 00:08:15.926 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:15.926 "is_configured": false, 00:08:15.926 "data_offset": 2048, 00:08:15.926 "data_size": 63488 00:08:15.926 } 00:08:15.926 ] 00:08:15.926 }' 00:08:15.926 23:48:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.926 23:48:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.185 [2024-11-02 23:48:10.158887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:16.185 [2024-11-02 23:48:10.158964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.185 [2024-11-02 23:48:10.158987] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:16.185 [2024-11-02 23:48:10.158999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.185 [2024-11-02 23:48:10.159381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.185 [2024-11-02 23:48:10.159399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:16.185 [2024-11-02 23:48:10.159472] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:16.185 [2024-11-02 23:48:10.159496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:16.185 pt2 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.185 [2024-11-02 23:48:10.170880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.185 "name": "raid_bdev1", 00:08:16.185 "uuid": "cff10e35-b5ac-48a4-9b51-446312ded26a", 00:08:16.185 "strip_size_kb": 64, 00:08:16.185 "state": "configuring", 00:08:16.185 "raid_level": "concat", 00:08:16.185 "superblock": true, 00:08:16.185 "num_base_bdevs": 3, 00:08:16.185 "num_base_bdevs_discovered": 1, 00:08:16.185 "num_base_bdevs_operational": 3, 00:08:16.185 "base_bdevs_list": [ 00:08:16.185 { 00:08:16.185 "name": "pt1", 00:08:16.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:16.185 "is_configured": true, 00:08:16.185 "data_offset": 2048, 00:08:16.185 "data_size": 63488 00:08:16.185 }, 00:08:16.185 { 00:08:16.185 "name": null, 00:08:16.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:16.185 "is_configured": false, 00:08:16.185 "data_offset": 0, 00:08:16.185 "data_size": 63488 00:08:16.185 }, 00:08:16.185 { 00:08:16.185 "name": null, 00:08:16.185 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:16.185 "is_configured": false, 00:08:16.185 "data_offset": 2048, 00:08:16.185 "data_size": 63488 00:08:16.185 } 00:08:16.185 ] 00:08:16.185 }' 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.185 23:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.754 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:16.754 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:16.754 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:16.754 23:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.755 [2024-11-02 23:48:10.654064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:16.755 [2024-11-02 23:48:10.654184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.755 [2024-11-02 23:48:10.654225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:16.755 [2024-11-02 23:48:10.654253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.755 [2024-11-02 23:48:10.654690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.755 [2024-11-02 23:48:10.654772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:16.755 [2024-11-02 23:48:10.654892] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:16.755 [2024-11-02 23:48:10.654945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:16.755 pt2 00:08:16.755 23:48:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.755 [2024-11-02 23:48:10.666025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:16.755 [2024-11-02 23:48:10.666103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.755 [2024-11-02 23:48:10.666138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:16.755 [2024-11-02 23:48:10.666164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.755 [2024-11-02 23:48:10.666522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.755 [2024-11-02 23:48:10.666576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:16.755 [2024-11-02 23:48:10.666660] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:16.755 [2024-11-02 23:48:10.666704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:16.755 [2024-11-02 23:48:10.666838] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:16.755 [2024-11-02 23:48:10.666880] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:16.755 [2024-11-02 23:48:10.667127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000002530 00:08:16.755 [2024-11-02 23:48:10.667266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:16.755 [2024-11-02 23:48:10.667307] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:16.755 [2024-11-02 23:48:10.667440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.755 pt3 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.755 23:48:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.755 "name": "raid_bdev1", 00:08:16.755 "uuid": "cff10e35-b5ac-48a4-9b51-446312ded26a", 00:08:16.755 "strip_size_kb": 64, 00:08:16.755 "state": "online", 00:08:16.755 "raid_level": "concat", 00:08:16.755 "superblock": true, 00:08:16.755 "num_base_bdevs": 3, 00:08:16.755 "num_base_bdevs_discovered": 3, 00:08:16.755 "num_base_bdevs_operational": 3, 00:08:16.755 "base_bdevs_list": [ 00:08:16.755 { 00:08:16.755 "name": "pt1", 00:08:16.755 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:16.755 "is_configured": true, 00:08:16.755 "data_offset": 2048, 00:08:16.755 "data_size": 63488 00:08:16.755 }, 00:08:16.755 { 00:08:16.755 "name": "pt2", 00:08:16.755 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:16.755 "is_configured": true, 00:08:16.755 "data_offset": 2048, 00:08:16.755 "data_size": 63488 00:08:16.755 }, 00:08:16.755 { 00:08:16.755 "name": "pt3", 00:08:16.755 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:16.755 "is_configured": true, 00:08:16.755 "data_offset": 2048, 00:08:16.755 "data_size": 63488 00:08:16.755 } 00:08:16.755 ] 00:08:16.755 }' 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.755 23:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.015 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:17.015 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:08:17.015 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:17.015 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:17.015 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:17.015 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:17.015 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:17.015 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:17.015 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.015 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.274 [2024-11-02 23:48:11.109647] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.274 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.274 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:17.274 "name": "raid_bdev1", 00:08:17.274 "aliases": [ 00:08:17.274 "cff10e35-b5ac-48a4-9b51-446312ded26a" 00:08:17.274 ], 00:08:17.274 "product_name": "Raid Volume", 00:08:17.274 "block_size": 512, 00:08:17.274 "num_blocks": 190464, 00:08:17.274 "uuid": "cff10e35-b5ac-48a4-9b51-446312ded26a", 00:08:17.274 "assigned_rate_limits": { 00:08:17.274 "rw_ios_per_sec": 0, 00:08:17.274 "rw_mbytes_per_sec": 0, 00:08:17.274 "r_mbytes_per_sec": 0, 00:08:17.274 "w_mbytes_per_sec": 0 00:08:17.274 }, 00:08:17.274 "claimed": false, 00:08:17.274 "zoned": false, 00:08:17.274 "supported_io_types": { 00:08:17.274 "read": true, 00:08:17.274 "write": true, 00:08:17.274 "unmap": true, 00:08:17.274 "flush": true, 00:08:17.274 "reset": true, 00:08:17.274 "nvme_admin": false, 00:08:17.274 "nvme_io": false, 
00:08:17.274 "nvme_io_md": false, 00:08:17.274 "write_zeroes": true, 00:08:17.274 "zcopy": false, 00:08:17.274 "get_zone_info": false, 00:08:17.274 "zone_management": false, 00:08:17.274 "zone_append": false, 00:08:17.274 "compare": false, 00:08:17.274 "compare_and_write": false, 00:08:17.274 "abort": false, 00:08:17.274 "seek_hole": false, 00:08:17.274 "seek_data": false, 00:08:17.274 "copy": false, 00:08:17.275 "nvme_iov_md": false 00:08:17.275 }, 00:08:17.275 "memory_domains": [ 00:08:17.275 { 00:08:17.275 "dma_device_id": "system", 00:08:17.275 "dma_device_type": 1 00:08:17.275 }, 00:08:17.275 { 00:08:17.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.275 "dma_device_type": 2 00:08:17.275 }, 00:08:17.275 { 00:08:17.275 "dma_device_id": "system", 00:08:17.275 "dma_device_type": 1 00:08:17.275 }, 00:08:17.275 { 00:08:17.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.275 "dma_device_type": 2 00:08:17.275 }, 00:08:17.275 { 00:08:17.275 "dma_device_id": "system", 00:08:17.275 "dma_device_type": 1 00:08:17.275 }, 00:08:17.275 { 00:08:17.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.275 "dma_device_type": 2 00:08:17.275 } 00:08:17.275 ], 00:08:17.275 "driver_specific": { 00:08:17.275 "raid": { 00:08:17.275 "uuid": "cff10e35-b5ac-48a4-9b51-446312ded26a", 00:08:17.275 "strip_size_kb": 64, 00:08:17.275 "state": "online", 00:08:17.275 "raid_level": "concat", 00:08:17.275 "superblock": true, 00:08:17.275 "num_base_bdevs": 3, 00:08:17.275 "num_base_bdevs_discovered": 3, 00:08:17.275 "num_base_bdevs_operational": 3, 00:08:17.275 "base_bdevs_list": [ 00:08:17.275 { 00:08:17.275 "name": "pt1", 00:08:17.275 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:17.275 "is_configured": true, 00:08:17.275 "data_offset": 2048, 00:08:17.275 "data_size": 63488 00:08:17.275 }, 00:08:17.275 { 00:08:17.275 "name": "pt2", 00:08:17.275 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:17.275 "is_configured": true, 00:08:17.275 "data_offset": 2048, 00:08:17.275 
"data_size": 63488 00:08:17.275 }, 00:08:17.275 { 00:08:17.275 "name": "pt3", 00:08:17.275 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:17.275 "is_configured": true, 00:08:17.275 "data_offset": 2048, 00:08:17.275 "data_size": 63488 00:08:17.275 } 00:08:17.275 ] 00:08:17.275 } 00:08:17.275 } 00:08:17.275 }' 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:17.275 pt2 00:08:17.275 pt3' 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.275 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:17.535 [2024-11-02 23:48:11.377087] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cff10e35-b5ac-48a4-9b51-446312ded26a '!=' cff10e35-b5ac-48a4-9b51-446312ded26a ']' 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77768 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 77768 ']' 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 77768 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77768 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77768' 00:08:17.535 killing process with pid 77768 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 77768 00:08:17.535 [2024-11-02 23:48:11.466073] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:08:17.535 [2024-11-02 23:48:11.466163] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.535 [2024-11-02 23:48:11.466230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.535 [2024-11-02 23:48:11.466245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:17.535 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 77768 00:08:17.535 [2024-11-02 23:48:11.499400] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.796 ************************************ 00:08:17.796 END TEST raid_superblock_test 00:08:17.796 ************************************ 00:08:17.796 23:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:17.796 00:08:17.796 real 0m3.988s 00:08:17.796 user 0m6.341s 00:08:17.796 sys 0m0.833s 00:08:17.796 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:17.796 23:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.796 23:48:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:17.796 23:48:11 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:17.796 23:48:11 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:17.796 23:48:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.796 ************************************ 00:08:17.796 START TEST raid_read_error_test 00:08:17.796 ************************************ 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:17.796 23:48:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UjtTlFkAif 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78010 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78010 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 78010 ']' 00:08:17.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:17.796 23:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.796 [2024-11-02 23:48:11.870516] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:08:17.796 [2024-11-02 23:48:11.870629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78010 ] 00:08:18.055 [2024-11-02 23:48:12.002325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.055 [2024-11-02 23:48:12.029706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.055 [2024-11-02 23:48:12.071680] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.055 [2024-11-02 23:48:12.071822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.637 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:18.637 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:18.637 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.637 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:18.637 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.637 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.903 BaseBdev1_malloc 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.903 true 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.903 [2024-11-02 23:48:12.751402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:18.903 [2024-11-02 23:48:12.751549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.903 [2024-11-02 23:48:12.751619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:18.903 [2024-11-02 23:48:12.751698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.903 [2024-11-02 23:48:12.754380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.903 [2024-11-02 23:48:12.754475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:18.903 BaseBdev1 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.903 BaseBdev2_malloc 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.903 true 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.903 [2024-11-02 23:48:12.792262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:18.903 [2024-11-02 23:48:12.792351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.903 [2024-11-02 23:48:12.792387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:18.903 [2024-11-02 23:48:12.792425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.903 [2024-11-02 23:48:12.794463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.903 [2024-11-02 23:48:12.794534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:18.903 BaseBdev2 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.903 BaseBdev3_malloc 00:08:18.903 23:48:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.903 true 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.903 [2024-11-02 23:48:12.832894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:18.903 [2024-11-02 23:48:12.832982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.903 [2024-11-02 23:48:12.833005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:18.903 [2024-11-02 23:48:12.833015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.903 [2024-11-02 23:48:12.835115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.903 [2024-11-02 23:48:12.835150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:18.903 BaseBdev3 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.903 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.903 [2024-11-02 23:48:12.844953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.903 [2024-11-02 23:48:12.846828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:18.903 [2024-11-02 23:48:12.846940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:18.903 [2024-11-02 23:48:12.847143] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:18.903 [2024-11-02 23:48:12.847191] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:18.903 [2024-11-02 23:48:12.847482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:18.904 [2024-11-02 23:48:12.847643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:18.904 [2024-11-02 23:48:12.847683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:18.904 [2024-11-02 23:48:12.847863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.904 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.904 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:18.904 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.904 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.904 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.904 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.904 23:48:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.904 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.904 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.904 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.904 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.904 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.904 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.904 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.904 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.904 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.904 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.904 "name": "raid_bdev1", 00:08:18.904 "uuid": "dc130eeb-06cd-4f6b-bfea-f87b0f96df5c", 00:08:18.904 "strip_size_kb": 64, 00:08:18.904 "state": "online", 00:08:18.904 "raid_level": "concat", 00:08:18.904 "superblock": true, 00:08:18.904 "num_base_bdevs": 3, 00:08:18.904 "num_base_bdevs_discovered": 3, 00:08:18.904 "num_base_bdevs_operational": 3, 00:08:18.904 "base_bdevs_list": [ 00:08:18.904 { 00:08:18.904 "name": "BaseBdev1", 00:08:18.904 "uuid": "bb692ccf-eb4a-5a38-b3c1-7fd667662b84", 00:08:18.904 "is_configured": true, 00:08:18.904 "data_offset": 2048, 00:08:18.904 "data_size": 63488 00:08:18.904 }, 00:08:18.904 { 00:08:18.904 "name": "BaseBdev2", 00:08:18.904 "uuid": "5ad68244-de3d-504c-961f-7ce134d51caa", 00:08:18.904 "is_configured": true, 00:08:18.904 "data_offset": 2048, 00:08:18.904 "data_size": 63488 
00:08:18.904 }, 00:08:18.904 { 00:08:18.904 "name": "BaseBdev3", 00:08:18.904 "uuid": "feb32bbb-ed0a-5491-8f42-324068c04263", 00:08:18.904 "is_configured": true, 00:08:18.904 "data_offset": 2048, 00:08:18.904 "data_size": 63488 00:08:18.904 } 00:08:18.904 ] 00:08:18.904 }' 00:08:18.904 23:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.904 23:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.472 23:48:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:19.472 23:48:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:19.472 [2024-11-02 23:48:13.360423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.411 "name": "raid_bdev1", 00:08:20.411 "uuid": "dc130eeb-06cd-4f6b-bfea-f87b0f96df5c", 00:08:20.411 "strip_size_kb": 64, 00:08:20.411 "state": "online", 00:08:20.411 "raid_level": "concat", 00:08:20.411 "superblock": true, 00:08:20.411 "num_base_bdevs": 3, 00:08:20.411 "num_base_bdevs_discovered": 3, 00:08:20.411 "num_base_bdevs_operational": 3, 00:08:20.411 "base_bdevs_list": [ 00:08:20.411 { 00:08:20.411 "name": "BaseBdev1", 00:08:20.411 "uuid": "bb692ccf-eb4a-5a38-b3c1-7fd667662b84", 00:08:20.411 "is_configured": true, 00:08:20.411 "data_offset": 2048, 00:08:20.411 "data_size": 63488 
00:08:20.411 }, 00:08:20.411 { 00:08:20.411 "name": "BaseBdev2", 00:08:20.411 "uuid": "5ad68244-de3d-504c-961f-7ce134d51caa", 00:08:20.411 "is_configured": true, 00:08:20.411 "data_offset": 2048, 00:08:20.411 "data_size": 63488 00:08:20.411 }, 00:08:20.411 { 00:08:20.411 "name": "BaseBdev3", 00:08:20.411 "uuid": "feb32bbb-ed0a-5491-8f42-324068c04263", 00:08:20.411 "is_configured": true, 00:08:20.411 "data_offset": 2048, 00:08:20.411 "data_size": 63488 00:08:20.411 } 00:08:20.411 ] 00:08:20.411 }' 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.411 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.670 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:20.670 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.670 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.670 [2024-11-02 23:48:14.752686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:20.670 [2024-11-02 23:48:14.752794] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:20.670 [2024-11-02 23:48:14.755547] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.670 [2024-11-02 23:48:14.755641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.670 [2024-11-02 23:48:14.755711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.670 [2024-11-02 23:48:14.755779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:20.670 { 00:08:20.670 "results": [ 00:08:20.670 { 00:08:20.670 "job": "raid_bdev1", 00:08:20.670 "core_mask": "0x1", 00:08:20.670 "workload": "randrw", 00:08:20.670 "percentage": 50, 
00:08:20.670 "status": "finished", 00:08:20.670 "queue_depth": 1, 00:08:20.670 "io_size": 131072, 00:08:20.670 "runtime": 1.393314, 00:08:20.670 "iops": 16440.6587459826, 00:08:20.670 "mibps": 2055.082343247825, 00:08:20.670 "io_failed": 1, 00:08:20.670 "io_timeout": 0, 00:08:20.670 "avg_latency_us": 84.37668318994604, 00:08:20.670 "min_latency_us": 25.4882096069869, 00:08:20.670 "max_latency_us": 1495.3082969432314 00:08:20.670 } 00:08:20.670 ], 00:08:20.670 "core_count": 1 00:08:20.670 } 00:08:20.670 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.670 23:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78010 00:08:20.670 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 78010 ']' 00:08:20.670 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 78010 00:08:20.670 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:20.929 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:20.929 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78010 00:08:20.929 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:20.929 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:20.929 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78010' 00:08:20.929 killing process with pid 78010 00:08:20.929 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 78010 00:08:20.929 [2024-11-02 23:48:14.798084] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.929 23:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 78010 00:08:20.929 [2024-11-02 
23:48:14.823350] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.195 23:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:21.195 23:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UjtTlFkAif 00:08:21.195 23:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:21.195 23:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:21.195 23:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:21.195 23:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:21.195 23:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:21.195 23:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:21.195 00:08:21.195 real 0m3.270s 00:08:21.195 user 0m4.187s 00:08:21.195 sys 0m0.501s 00:08:21.195 23:48:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:21.195 23:48:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.195 ************************************ 00:08:21.195 END TEST raid_read_error_test 00:08:21.195 ************************************ 00:08:21.195 23:48:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:08:21.195 23:48:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:21.195 23:48:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:21.195 23:48:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.195 ************************************ 00:08:21.195 START TEST raid_write_error_test 00:08:21.195 ************************************ 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:08:21.195 23:48:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:21.195 23:48:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SmW2TnldpQ 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78139 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78139 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 78139 ']' 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:21.195 23:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.195 [2024-11-02 23:48:15.216048] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:08:21.195 [2024-11-02 23:48:15.216196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78139 ] 00:08:21.454 [2024-11-02 23:48:15.368933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.454 [2024-11-02 23:48:15.394150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.454 [2024-11-02 23:48:15.436049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.454 [2024-11-02 23:48:15.436080] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.022 BaseBdev1_malloc 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.022 true 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.022 [2024-11-02 23:48:16.089296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:22.022 [2024-11-02 23:48:16.089405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.022 [2024-11-02 23:48:16.089443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:22.022 [2024-11-02 23:48:16.089471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.022 [2024-11-02 23:48:16.091547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.022 [2024-11-02 23:48:16.091617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:22.022 BaseBdev1 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:22.022 BaseBdev2_malloc 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.022 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.282 true 00:08:22.282 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.282 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:22.282 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.282 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.282 [2024-11-02 23:48:16.129526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:22.282 [2024-11-02 23:48:16.129614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.282 [2024-11-02 23:48:16.129648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:22.282 [2024-11-02 23:48:16.129718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.282 [2024-11-02 23:48:16.131852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.282 [2024-11-02 23:48:16.131933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:22.282 BaseBdev2 00:08:22.282 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.282 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.282 23:48:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:22.282 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.282 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.282 BaseBdev3_malloc 00:08:22.282 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.282 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:22.282 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.282 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.282 true 00:08:22.282 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.282 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:22.282 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.282 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.282 [2024-11-02 23:48:16.170009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:22.282 [2024-11-02 23:48:16.170098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.282 [2024-11-02 23:48:16.170135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:22.282 [2024-11-02 23:48:16.170166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.283 [2024-11-02 23:48:16.172298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.283 [2024-11-02 23:48:16.172367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:22.283 BaseBdev3 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.283 [2024-11-02 23:48:16.182065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.283 [2024-11-02 23:48:16.183975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.283 [2024-11-02 23:48:16.184085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:22.283 [2024-11-02 23:48:16.184287] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:22.283 [2024-11-02 23:48:16.184336] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:22.283 [2024-11-02 23:48:16.184583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:22.283 [2024-11-02 23:48:16.184713] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:22.283 [2024-11-02 23:48:16.184723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:22.283 [2024-11-02 23:48:16.184859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.283 "name": "raid_bdev1", 00:08:22.283 "uuid": "592da2ee-51ba-4acf-aeb9-b82e80e48c17", 00:08:22.283 "strip_size_kb": 64, 00:08:22.283 "state": "online", 00:08:22.283 "raid_level": "concat", 00:08:22.283 "superblock": true, 00:08:22.283 "num_base_bdevs": 3, 00:08:22.283 "num_base_bdevs_discovered": 3, 00:08:22.283 "num_base_bdevs_operational": 3, 00:08:22.283 "base_bdevs_list": [ 00:08:22.283 { 00:08:22.283 
"name": "BaseBdev1", 00:08:22.283 "uuid": "a35b3477-bcea-5b82-8035-31e99f155b46", 00:08:22.283 "is_configured": true, 00:08:22.283 "data_offset": 2048, 00:08:22.283 "data_size": 63488 00:08:22.283 }, 00:08:22.283 { 00:08:22.283 "name": "BaseBdev2", 00:08:22.283 "uuid": "bbb93491-624c-59d9-8521-4f94bdfdae86", 00:08:22.283 "is_configured": true, 00:08:22.283 "data_offset": 2048, 00:08:22.283 "data_size": 63488 00:08:22.283 }, 00:08:22.283 { 00:08:22.283 "name": "BaseBdev3", 00:08:22.283 "uuid": "892de5ee-c1c6-506e-b5ba-4e248a9dddcf", 00:08:22.283 "is_configured": true, 00:08:22.283 "data_offset": 2048, 00:08:22.283 "data_size": 63488 00:08:22.283 } 00:08:22.283 ] 00:08:22.283 }' 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.283 23:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.543 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:22.543 23:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:22.810 [2024-11-02 23:48:16.689611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.757 "name": "raid_bdev1", 00:08:23.757 "uuid": "592da2ee-51ba-4acf-aeb9-b82e80e48c17", 00:08:23.757 "strip_size_kb": 64, 00:08:23.757 "state": "online", 
00:08:23.757 "raid_level": "concat", 00:08:23.757 "superblock": true, 00:08:23.757 "num_base_bdevs": 3, 00:08:23.757 "num_base_bdevs_discovered": 3, 00:08:23.757 "num_base_bdevs_operational": 3, 00:08:23.757 "base_bdevs_list": [ 00:08:23.757 { 00:08:23.757 "name": "BaseBdev1", 00:08:23.757 "uuid": "a35b3477-bcea-5b82-8035-31e99f155b46", 00:08:23.757 "is_configured": true, 00:08:23.757 "data_offset": 2048, 00:08:23.757 "data_size": 63488 00:08:23.757 }, 00:08:23.757 { 00:08:23.757 "name": "BaseBdev2", 00:08:23.757 "uuid": "bbb93491-624c-59d9-8521-4f94bdfdae86", 00:08:23.757 "is_configured": true, 00:08:23.757 "data_offset": 2048, 00:08:23.757 "data_size": 63488 00:08:23.757 }, 00:08:23.757 { 00:08:23.757 "name": "BaseBdev3", 00:08:23.757 "uuid": "892de5ee-c1c6-506e-b5ba-4e248a9dddcf", 00:08:23.757 "is_configured": true, 00:08:23.757 "data_offset": 2048, 00:08:23.757 "data_size": 63488 00:08:23.757 } 00:08:23.757 ] 00:08:23.757 }' 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.757 23:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.018 23:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:24.018 23:48:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.018 23:48:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.018 [2024-11-02 23:48:18.045943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.018 [2024-11-02 23:48:18.046026] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.018 [2024-11-02 23:48:18.048759] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.018 [2024-11-02 23:48:18.048854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.018 [2024-11-02 23:48:18.048909] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.018 [2024-11-02 23:48:18.048981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:24.018 { 00:08:24.018 "results": [ 00:08:24.018 { 00:08:24.018 "job": "raid_bdev1", 00:08:24.018 "core_mask": "0x1", 00:08:24.018 "workload": "randrw", 00:08:24.018 "percentage": 50, 00:08:24.018 "status": "finished", 00:08:24.018 "queue_depth": 1, 00:08:24.018 "io_size": 131072, 00:08:24.018 "runtime": 1.356991, 00:08:24.018 "iops": 16378.148418080886, 00:08:24.018 "mibps": 2047.2685522601107, 00:08:24.018 "io_failed": 1, 00:08:24.018 "io_timeout": 0, 00:08:24.018 "avg_latency_us": 84.60054360191081, 00:08:24.018 "min_latency_us": 25.041048034934498, 00:08:24.018 "max_latency_us": 1438.071615720524 00:08:24.018 } 00:08:24.018 ], 00:08:24.018 "core_count": 1 00:08:24.018 } 00:08:24.018 23:48:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.018 23:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78139 00:08:24.018 23:48:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 78139 ']' 00:08:24.018 23:48:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 78139 00:08:24.018 23:48:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:08:24.018 23:48:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:24.018 23:48:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78139 00:08:24.018 23:48:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:24.018 23:48:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:24.018 killing process with pid 78139 00:08:24.018 23:48:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78139' 00:08:24.018 23:48:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 78139 00:08:24.018 [2024-11-02 23:48:18.085355] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.018 23:48:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 78139 00:08:24.278 [2024-11-02 23:48:18.113290] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:24.278 23:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SmW2TnldpQ 00:08:24.278 23:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:24.278 23:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:24.278 23:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:24.278 23:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:24.278 23:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:24.278 23:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:24.278 23:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:24.278 00:08:24.278 real 0m3.209s 00:08:24.278 user 0m4.068s 00:08:24.278 sys 0m0.486s 00:08:24.278 23:48:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:24.278 23:48:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.278 ************************************ 00:08:24.278 END TEST raid_write_error_test 00:08:24.278 ************************************ 00:08:24.538 23:48:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:24.538 23:48:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:08:24.538 23:48:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:24.538 23:48:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:24.538 23:48:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:24.538 ************************************ 00:08:24.538 START TEST raid_state_function_test 00:08:24.538 ************************************ 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78266 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78266' 00:08:24.538 Process raid pid: 78266 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78266 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 78266 ']' 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:24.538 23:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.538 [2024-11-02 23:48:18.503240] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:08:24.538 [2024-11-02 23:48:18.503457] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.797 [2024-11-02 23:48:18.656841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.798 [2024-11-02 23:48:18.681782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.798 [2024-11-02 23:48:18.723350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.798 [2024-11-02 23:48:18.723382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.366 [2024-11-02 23:48:19.336044] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:25.366 [2024-11-02 23:48:19.336107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:25.366 [2024-11-02 23:48:19.336122] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:25.366 [2024-11-02 23:48:19.336131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:25.366 [2024-11-02 23:48:19.336138] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:25.366 [2024-11-02 23:48:19.336149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.366 
23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.366 "name": "Existed_Raid", 00:08:25.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.366 "strip_size_kb": 0, 00:08:25.366 "state": "configuring", 00:08:25.366 "raid_level": "raid1", 00:08:25.366 "superblock": false, 00:08:25.366 "num_base_bdevs": 3, 00:08:25.366 "num_base_bdevs_discovered": 0, 00:08:25.366 "num_base_bdevs_operational": 3, 00:08:25.366 "base_bdevs_list": [ 00:08:25.366 { 00:08:25.366 "name": "BaseBdev1", 00:08:25.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.366 "is_configured": false, 00:08:25.366 "data_offset": 0, 00:08:25.366 "data_size": 0 00:08:25.366 }, 00:08:25.366 { 00:08:25.366 "name": "BaseBdev2", 00:08:25.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.366 "is_configured": false, 00:08:25.366 "data_offset": 0, 00:08:25.366 "data_size": 0 00:08:25.366 }, 00:08:25.366 { 00:08:25.366 "name": "BaseBdev3", 00:08:25.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.366 "is_configured": false, 00:08:25.366 "data_offset": 0, 00:08:25.366 "data_size": 0 00:08:25.366 } 00:08:25.366 ] 00:08:25.366 }' 00:08:25.366 23:48:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.366 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.938 [2024-11-02 23:48:19.819181] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:25.938 [2024-11-02 23:48:19.819272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.938 [2024-11-02 23:48:19.831156] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:25.938 [2024-11-02 23:48:19.831251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:25.938 [2024-11-02 23:48:19.831264] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:25.938 [2024-11-02 23:48:19.831273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:25.938 [2024-11-02 23:48:19.831280] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:25.938 [2024-11-02 23:48:19.831289] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.938 [2024-11-02 23:48:19.851762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.938 BaseBdev1 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.938 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.938 [ 00:08:25.938 { 00:08:25.938 "name": "BaseBdev1", 00:08:25.938 "aliases": [ 00:08:25.938 "32303f9e-e1d5-4398-8e3f-b9190706bdf4" 00:08:25.938 ], 00:08:25.938 "product_name": "Malloc disk", 00:08:25.938 "block_size": 512, 00:08:25.938 "num_blocks": 65536, 00:08:25.938 "uuid": "32303f9e-e1d5-4398-8e3f-b9190706bdf4", 00:08:25.938 "assigned_rate_limits": { 00:08:25.938 "rw_ios_per_sec": 0, 00:08:25.938 "rw_mbytes_per_sec": 0, 00:08:25.938 "r_mbytes_per_sec": 0, 00:08:25.938 "w_mbytes_per_sec": 0 00:08:25.938 }, 00:08:25.938 "claimed": true, 00:08:25.938 "claim_type": "exclusive_write", 00:08:25.938 "zoned": false, 00:08:25.938 "supported_io_types": { 00:08:25.938 "read": true, 00:08:25.938 "write": true, 00:08:25.938 "unmap": true, 00:08:25.938 "flush": true, 00:08:25.938 "reset": true, 00:08:25.938 "nvme_admin": false, 00:08:25.938 "nvme_io": false, 00:08:25.938 "nvme_io_md": false, 00:08:25.938 "write_zeroes": true, 00:08:25.938 "zcopy": true, 00:08:25.939 "get_zone_info": false, 00:08:25.939 "zone_management": false, 00:08:25.939 "zone_append": false, 00:08:25.939 "compare": false, 00:08:25.939 "compare_and_write": false, 00:08:25.939 "abort": true, 00:08:25.939 "seek_hole": false, 00:08:25.939 "seek_data": false, 00:08:25.939 "copy": true, 00:08:25.939 "nvme_iov_md": false 00:08:25.939 }, 00:08:25.939 "memory_domains": [ 00:08:25.939 { 00:08:25.939 "dma_device_id": "system", 00:08:25.939 "dma_device_type": 1 00:08:25.939 }, 00:08:25.939 { 00:08:25.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.939 "dma_device_type": 2 00:08:25.939 } 00:08:25.939 ], 00:08:25.939 "driver_specific": {} 00:08:25.939 } 00:08:25.939 ] 00:08:25.939 23:48:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.939 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:25.939 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:25.939 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.939 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.939 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:25.939 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:25.939 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.939 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.939 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.939 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.939 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.939 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.939 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.939 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.939 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.939 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.939 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:25.939 "name": "Existed_Raid", 00:08:25.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.939 "strip_size_kb": 0, 00:08:25.939 "state": "configuring", 00:08:25.939 "raid_level": "raid1", 00:08:25.939 "superblock": false, 00:08:25.939 "num_base_bdevs": 3, 00:08:25.939 "num_base_bdevs_discovered": 1, 00:08:25.939 "num_base_bdevs_operational": 3, 00:08:25.939 "base_bdevs_list": [ 00:08:25.939 { 00:08:25.939 "name": "BaseBdev1", 00:08:25.939 "uuid": "32303f9e-e1d5-4398-8e3f-b9190706bdf4", 00:08:25.939 "is_configured": true, 00:08:25.939 "data_offset": 0, 00:08:25.939 "data_size": 65536 00:08:25.939 }, 00:08:25.939 { 00:08:25.939 "name": "BaseBdev2", 00:08:25.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.939 "is_configured": false, 00:08:25.939 "data_offset": 0, 00:08:25.939 "data_size": 0 00:08:25.939 }, 00:08:25.939 { 00:08:25.939 "name": "BaseBdev3", 00:08:25.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.939 "is_configured": false, 00:08:25.939 "data_offset": 0, 00:08:25.939 "data_size": 0 00:08:25.939 } 00:08:25.939 ] 00:08:25.939 }' 00:08:25.939 23:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.939 23:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.508 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:26.508 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.508 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.508 [2024-11-02 23:48:20.315020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:26.508 [2024-11-02 23:48:20.315135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:26.508 23:48:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.508 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:26.508 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.508 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.508 [2024-11-02 23:48:20.327036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.508 [2024-11-02 23:48:20.328931] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:26.508 [2024-11-02 23:48:20.328974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:26.508 [2024-11-02 23:48:20.328984] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:26.508 [2024-11-02 23:48:20.328993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:26.508 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.509 "name": "Existed_Raid", 00:08:26.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.509 "strip_size_kb": 0, 00:08:26.509 "state": "configuring", 00:08:26.509 "raid_level": "raid1", 00:08:26.509 "superblock": false, 00:08:26.509 "num_base_bdevs": 3, 00:08:26.509 "num_base_bdevs_discovered": 1, 00:08:26.509 "num_base_bdevs_operational": 3, 00:08:26.509 "base_bdevs_list": [ 00:08:26.509 { 00:08:26.509 "name": "BaseBdev1", 00:08:26.509 "uuid": "32303f9e-e1d5-4398-8e3f-b9190706bdf4", 00:08:26.509 "is_configured": true, 00:08:26.509 "data_offset": 0, 00:08:26.509 "data_size": 65536 00:08:26.509 }, 00:08:26.509 { 00:08:26.509 "name": "BaseBdev2", 00:08:26.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.509 
"is_configured": false, 00:08:26.509 "data_offset": 0, 00:08:26.509 "data_size": 0 00:08:26.509 }, 00:08:26.509 { 00:08:26.509 "name": "BaseBdev3", 00:08:26.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.509 "is_configured": false, 00:08:26.509 "data_offset": 0, 00:08:26.509 "data_size": 0 00:08:26.509 } 00:08:26.509 ] 00:08:26.509 }' 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.509 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.769 [2024-11-02 23:48:20.797040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.769 BaseBdev2 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:26.769 23:48:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.769 [ 00:08:26.769 { 00:08:26.769 "name": "BaseBdev2", 00:08:26.769 "aliases": [ 00:08:26.769 "0431c0e5-eda5-48a8-82c9-e301ffaa94a1" 00:08:26.769 ], 00:08:26.769 "product_name": "Malloc disk", 00:08:26.769 "block_size": 512, 00:08:26.769 "num_blocks": 65536, 00:08:26.769 "uuid": "0431c0e5-eda5-48a8-82c9-e301ffaa94a1", 00:08:26.769 "assigned_rate_limits": { 00:08:26.769 "rw_ios_per_sec": 0, 00:08:26.769 "rw_mbytes_per_sec": 0, 00:08:26.769 "r_mbytes_per_sec": 0, 00:08:26.769 "w_mbytes_per_sec": 0 00:08:26.769 }, 00:08:26.769 "claimed": true, 00:08:26.769 "claim_type": "exclusive_write", 00:08:26.769 "zoned": false, 00:08:26.769 "supported_io_types": { 00:08:26.769 "read": true, 00:08:26.769 "write": true, 00:08:26.769 "unmap": true, 00:08:26.769 "flush": true, 00:08:26.769 "reset": true, 00:08:26.769 "nvme_admin": false, 00:08:26.769 "nvme_io": false, 00:08:26.769 "nvme_io_md": false, 00:08:26.769 "write_zeroes": true, 00:08:26.769 "zcopy": true, 00:08:26.769 "get_zone_info": false, 00:08:26.769 "zone_management": false, 00:08:26.769 "zone_append": false, 00:08:26.769 "compare": false, 00:08:26.769 "compare_and_write": false, 00:08:26.769 "abort": true, 00:08:26.769 "seek_hole": false, 00:08:26.769 "seek_data": false, 00:08:26.769 "copy": true, 00:08:26.769 "nvme_iov_md": false 00:08:26.769 }, 00:08:26.769 
"memory_domains": [ 00:08:26.769 { 00:08:26.769 "dma_device_id": "system", 00:08:26.769 "dma_device_type": 1 00:08:26.769 }, 00:08:26.769 { 00:08:26.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.769 "dma_device_type": 2 00:08:26.769 } 00:08:26.769 ], 00:08:26.769 "driver_specific": {} 00:08:26.769 } 00:08:26.769 ] 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.769 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.029 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.029 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.029 "name": "Existed_Raid", 00:08:27.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.029 "strip_size_kb": 0, 00:08:27.029 "state": "configuring", 00:08:27.029 "raid_level": "raid1", 00:08:27.029 "superblock": false, 00:08:27.029 "num_base_bdevs": 3, 00:08:27.029 "num_base_bdevs_discovered": 2, 00:08:27.029 "num_base_bdevs_operational": 3, 00:08:27.029 "base_bdevs_list": [ 00:08:27.029 { 00:08:27.029 "name": "BaseBdev1", 00:08:27.029 "uuid": "32303f9e-e1d5-4398-8e3f-b9190706bdf4", 00:08:27.029 "is_configured": true, 00:08:27.029 "data_offset": 0, 00:08:27.029 "data_size": 65536 00:08:27.029 }, 00:08:27.029 { 00:08:27.029 "name": "BaseBdev2", 00:08:27.029 "uuid": "0431c0e5-eda5-48a8-82c9-e301ffaa94a1", 00:08:27.029 "is_configured": true, 00:08:27.029 "data_offset": 0, 00:08:27.029 "data_size": 65536 00:08:27.029 }, 00:08:27.029 { 00:08:27.029 "name": "BaseBdev3", 00:08:27.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.029 "is_configured": false, 00:08:27.029 "data_offset": 0, 00:08:27.029 "data_size": 0 00:08:27.029 } 00:08:27.029 ] 00:08:27.029 }' 00:08:27.029 23:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.029 23:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.289 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:08:27.289 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.289 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.289 [2024-11-02 23:48:21.321514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:27.289 [2024-11-02 23:48:21.321663] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:27.289 [2024-11-02 23:48:21.321706] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:27.289 [2024-11-02 23:48:21.322085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:27.289 [2024-11-02 23:48:21.322328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:27.289 [2024-11-02 23:48:21.322420] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:27.289 [2024-11-02 23:48:21.322767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.289 BaseBdev3 00:08:27.289 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.289 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:27.289 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:27.289 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:27.289 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:27.289 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:27.289 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:27.289 23:48:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:27.289 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.289 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.289 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.289 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:27.289 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.289 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.289 [ 00:08:27.289 { 00:08:27.289 "name": "BaseBdev3", 00:08:27.289 "aliases": [ 00:08:27.290 "6a5c4669-2490-4382-8fb9-d49a95074762" 00:08:27.290 ], 00:08:27.290 "product_name": "Malloc disk", 00:08:27.290 "block_size": 512, 00:08:27.290 "num_blocks": 65536, 00:08:27.290 "uuid": "6a5c4669-2490-4382-8fb9-d49a95074762", 00:08:27.290 "assigned_rate_limits": { 00:08:27.290 "rw_ios_per_sec": 0, 00:08:27.290 "rw_mbytes_per_sec": 0, 00:08:27.290 "r_mbytes_per_sec": 0, 00:08:27.290 "w_mbytes_per_sec": 0 00:08:27.290 }, 00:08:27.290 "claimed": true, 00:08:27.290 "claim_type": "exclusive_write", 00:08:27.290 "zoned": false, 00:08:27.290 "supported_io_types": { 00:08:27.290 "read": true, 00:08:27.290 "write": true, 00:08:27.290 "unmap": true, 00:08:27.290 "flush": true, 00:08:27.290 "reset": true, 00:08:27.290 "nvme_admin": false, 00:08:27.290 "nvme_io": false, 00:08:27.290 "nvme_io_md": false, 00:08:27.290 "write_zeroes": true, 00:08:27.290 "zcopy": true, 00:08:27.290 "get_zone_info": false, 00:08:27.290 "zone_management": false, 00:08:27.290 "zone_append": false, 00:08:27.290 "compare": false, 00:08:27.290 "compare_and_write": false, 00:08:27.290 "abort": true, 00:08:27.290 "seek_hole": false, 00:08:27.290 "seek_data": false, 00:08:27.290 
"copy": true, 00:08:27.290 "nvme_iov_md": false 00:08:27.290 }, 00:08:27.290 "memory_domains": [ 00:08:27.290 { 00:08:27.290 "dma_device_id": "system", 00:08:27.290 "dma_device_type": 1 00:08:27.290 }, 00:08:27.290 { 00:08:27.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.290 "dma_device_type": 2 00:08:27.290 } 00:08:27.290 ], 00:08:27.290 "driver_specific": {} 00:08:27.290 } 00:08:27.290 ] 00:08:27.290 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.290 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:27.290 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:27.290 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:27.290 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:27.290 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.290 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.290 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.290 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.290 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.290 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.290 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.290 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.290 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.290 23:48:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.290 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.290 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.290 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.557 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.557 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.557 "name": "Existed_Raid", 00:08:27.557 "uuid": "8fc7b18f-0db6-4a3e-8c55-f972fd3edfd4", 00:08:27.557 "strip_size_kb": 0, 00:08:27.557 "state": "online", 00:08:27.557 "raid_level": "raid1", 00:08:27.557 "superblock": false, 00:08:27.557 "num_base_bdevs": 3, 00:08:27.557 "num_base_bdevs_discovered": 3, 00:08:27.557 "num_base_bdevs_operational": 3, 00:08:27.557 "base_bdevs_list": [ 00:08:27.557 { 00:08:27.557 "name": "BaseBdev1", 00:08:27.557 "uuid": "32303f9e-e1d5-4398-8e3f-b9190706bdf4", 00:08:27.557 "is_configured": true, 00:08:27.557 "data_offset": 0, 00:08:27.557 "data_size": 65536 00:08:27.557 }, 00:08:27.557 { 00:08:27.557 "name": "BaseBdev2", 00:08:27.557 "uuid": "0431c0e5-eda5-48a8-82c9-e301ffaa94a1", 00:08:27.557 "is_configured": true, 00:08:27.557 "data_offset": 0, 00:08:27.557 "data_size": 65536 00:08:27.557 }, 00:08:27.557 { 00:08:27.557 "name": "BaseBdev3", 00:08:27.557 "uuid": "6a5c4669-2490-4382-8fb9-d49a95074762", 00:08:27.557 "is_configured": true, 00:08:27.557 "data_offset": 0, 00:08:27.557 "data_size": 65536 00:08:27.557 } 00:08:27.557 ] 00:08:27.557 }' 00:08:27.557 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.557 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.826 23:48:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:27.826 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:27.826 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:27.826 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:27.826 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:27.826 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:27.826 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:27.826 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.826 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.826 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:27.826 [2024-11-02 23:48:21.828991] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.826 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.826 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:27.826 "name": "Existed_Raid", 00:08:27.826 "aliases": [ 00:08:27.826 "8fc7b18f-0db6-4a3e-8c55-f972fd3edfd4" 00:08:27.826 ], 00:08:27.826 "product_name": "Raid Volume", 00:08:27.826 "block_size": 512, 00:08:27.826 "num_blocks": 65536, 00:08:27.826 "uuid": "8fc7b18f-0db6-4a3e-8c55-f972fd3edfd4", 00:08:27.826 "assigned_rate_limits": { 00:08:27.826 "rw_ios_per_sec": 0, 00:08:27.826 "rw_mbytes_per_sec": 0, 00:08:27.826 "r_mbytes_per_sec": 0, 00:08:27.826 "w_mbytes_per_sec": 0 00:08:27.826 }, 00:08:27.826 "claimed": false, 00:08:27.826 "zoned": false, 
00:08:27.826 "supported_io_types": { 00:08:27.826 "read": true, 00:08:27.826 "write": true, 00:08:27.826 "unmap": false, 00:08:27.826 "flush": false, 00:08:27.826 "reset": true, 00:08:27.826 "nvme_admin": false, 00:08:27.826 "nvme_io": false, 00:08:27.826 "nvme_io_md": false, 00:08:27.826 "write_zeroes": true, 00:08:27.826 "zcopy": false, 00:08:27.826 "get_zone_info": false, 00:08:27.826 "zone_management": false, 00:08:27.826 "zone_append": false, 00:08:27.826 "compare": false, 00:08:27.826 "compare_and_write": false, 00:08:27.826 "abort": false, 00:08:27.826 "seek_hole": false, 00:08:27.826 "seek_data": false, 00:08:27.826 "copy": false, 00:08:27.826 "nvme_iov_md": false 00:08:27.826 }, 00:08:27.826 "memory_domains": [ 00:08:27.826 { 00:08:27.826 "dma_device_id": "system", 00:08:27.826 "dma_device_type": 1 00:08:27.826 }, 00:08:27.826 { 00:08:27.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.826 "dma_device_type": 2 00:08:27.826 }, 00:08:27.826 { 00:08:27.826 "dma_device_id": "system", 00:08:27.826 "dma_device_type": 1 00:08:27.826 }, 00:08:27.826 { 00:08:27.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.826 "dma_device_type": 2 00:08:27.826 }, 00:08:27.826 { 00:08:27.826 "dma_device_id": "system", 00:08:27.826 "dma_device_type": 1 00:08:27.826 }, 00:08:27.826 { 00:08:27.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.826 "dma_device_type": 2 00:08:27.826 } 00:08:27.826 ], 00:08:27.826 "driver_specific": { 00:08:27.826 "raid": { 00:08:27.826 "uuid": "8fc7b18f-0db6-4a3e-8c55-f972fd3edfd4", 00:08:27.826 "strip_size_kb": 0, 00:08:27.826 "state": "online", 00:08:27.826 "raid_level": "raid1", 00:08:27.826 "superblock": false, 00:08:27.826 "num_base_bdevs": 3, 00:08:27.826 "num_base_bdevs_discovered": 3, 00:08:27.826 "num_base_bdevs_operational": 3, 00:08:27.826 "base_bdevs_list": [ 00:08:27.826 { 00:08:27.826 "name": "BaseBdev1", 00:08:27.826 "uuid": "32303f9e-e1d5-4398-8e3f-b9190706bdf4", 00:08:27.826 "is_configured": true, 00:08:27.826 
"data_offset": 0, 00:08:27.826 "data_size": 65536 00:08:27.826 }, 00:08:27.826 { 00:08:27.826 "name": "BaseBdev2", 00:08:27.826 "uuid": "0431c0e5-eda5-48a8-82c9-e301ffaa94a1", 00:08:27.826 "is_configured": true, 00:08:27.826 "data_offset": 0, 00:08:27.826 "data_size": 65536 00:08:27.826 }, 00:08:27.826 { 00:08:27.826 "name": "BaseBdev3", 00:08:27.826 "uuid": "6a5c4669-2490-4382-8fb9-d49a95074762", 00:08:27.826 "is_configured": true, 00:08:27.826 "data_offset": 0, 00:08:27.826 "data_size": 65536 00:08:27.826 } 00:08:27.826 ] 00:08:27.826 } 00:08:27.826 } 00:08:27.826 }' 00:08:27.826 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:27.826 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:27.826 BaseBdev2 00:08:27.826 BaseBdev3' 00:08:27.826 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.085 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:28.085 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.085 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:28.085 23:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.085 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.085 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.085 23:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.085 [2024-11-02 23:48:22.088252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.085 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.086 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.086 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.086 "name": "Existed_Raid", 00:08:28.086 "uuid": "8fc7b18f-0db6-4a3e-8c55-f972fd3edfd4", 00:08:28.086 "strip_size_kb": 0, 00:08:28.086 "state": "online", 00:08:28.086 "raid_level": "raid1", 00:08:28.086 "superblock": false, 00:08:28.086 "num_base_bdevs": 3, 00:08:28.086 "num_base_bdevs_discovered": 2, 00:08:28.086 "num_base_bdevs_operational": 2, 00:08:28.086 "base_bdevs_list": [ 00:08:28.086 { 00:08:28.086 "name": null, 00:08:28.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.086 "is_configured": false, 00:08:28.086 "data_offset": 0, 00:08:28.086 "data_size": 65536 00:08:28.086 }, 00:08:28.086 { 00:08:28.086 "name": "BaseBdev2", 00:08:28.086 "uuid": "0431c0e5-eda5-48a8-82c9-e301ffaa94a1", 00:08:28.086 "is_configured": true, 00:08:28.086 "data_offset": 0, 00:08:28.086 "data_size": 65536 00:08:28.086 }, 00:08:28.086 { 00:08:28.086 "name": "BaseBdev3", 00:08:28.086 "uuid": "6a5c4669-2490-4382-8fb9-d49a95074762", 00:08:28.086 "is_configured": true, 00:08:28.086 "data_offset": 0, 00:08:28.086 "data_size": 65536 00:08:28.086 } 00:08:28.086 ] 
00:08:28.086 }' 00:08:28.086 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.086 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.655 [2024-11-02 23:48:22.570670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:28.655 23:48:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.655 [2024-11-02 23:48:22.629687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:28.655 [2024-11-02 23:48:22.629828] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.655 [2024-11-02 23:48:22.641101] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.655 [2024-11-02 23:48:22.641203] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.655 [2024-11-02 23:48:22.641223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:28.655 23:48:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.655 BaseBdev2 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:28.655 
23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.655 [ 00:08:28.655 { 00:08:28.655 "name": "BaseBdev2", 00:08:28.655 "aliases": [ 00:08:28.655 "412e1947-99c6-48a7-8208-4b72e700d96a" 00:08:28.655 ], 00:08:28.655 "product_name": "Malloc disk", 00:08:28.655 "block_size": 512, 00:08:28.655 "num_blocks": 65536, 00:08:28.655 "uuid": "412e1947-99c6-48a7-8208-4b72e700d96a", 00:08:28.655 "assigned_rate_limits": { 00:08:28.655 "rw_ios_per_sec": 0, 00:08:28.655 "rw_mbytes_per_sec": 0, 00:08:28.655 "r_mbytes_per_sec": 0, 00:08:28.655 "w_mbytes_per_sec": 0 00:08:28.655 }, 00:08:28.655 "claimed": false, 00:08:28.655 "zoned": false, 00:08:28.655 "supported_io_types": { 00:08:28.655 "read": true, 00:08:28.655 "write": true, 00:08:28.655 "unmap": true, 00:08:28.655 "flush": true, 00:08:28.655 "reset": true, 00:08:28.655 "nvme_admin": false, 00:08:28.655 "nvme_io": false, 00:08:28.655 "nvme_io_md": false, 00:08:28.655 "write_zeroes": true, 
00:08:28.655 "zcopy": true, 00:08:28.655 "get_zone_info": false, 00:08:28.655 "zone_management": false, 00:08:28.655 "zone_append": false, 00:08:28.655 "compare": false, 00:08:28.655 "compare_and_write": false, 00:08:28.655 "abort": true, 00:08:28.655 "seek_hole": false, 00:08:28.655 "seek_data": false, 00:08:28.655 "copy": true, 00:08:28.655 "nvme_iov_md": false 00:08:28.655 }, 00:08:28.655 "memory_domains": [ 00:08:28.655 { 00:08:28.655 "dma_device_id": "system", 00:08:28.655 "dma_device_type": 1 00:08:28.655 }, 00:08:28.655 { 00:08:28.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.655 "dma_device_type": 2 00:08:28.655 } 00:08:28.655 ], 00:08:28.655 "driver_specific": {} 00:08:28.655 } 00:08:28.655 ] 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.655 BaseBdev3 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:28.655 23:48:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.655 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.656 [ 00:08:28.656 { 00:08:28.656 "name": "BaseBdev3", 00:08:28.656 "aliases": [ 00:08:28.656 "e9ce8e24-06af-4b96-921f-faba8a364aad" 00:08:28.656 ], 00:08:28.656 "product_name": "Malloc disk", 00:08:28.656 "block_size": 512, 00:08:28.656 "num_blocks": 65536, 00:08:28.656 "uuid": "e9ce8e24-06af-4b96-921f-faba8a364aad", 00:08:28.656 "assigned_rate_limits": { 00:08:28.656 "rw_ios_per_sec": 0, 00:08:28.656 "rw_mbytes_per_sec": 0, 00:08:28.656 "r_mbytes_per_sec": 0, 00:08:28.656 "w_mbytes_per_sec": 0 00:08:28.656 }, 00:08:28.656 "claimed": false, 00:08:28.656 "zoned": false, 00:08:28.656 "supported_io_types": { 00:08:28.656 "read": true, 00:08:28.656 "write": true, 00:08:28.656 "unmap": true, 00:08:28.656 "flush": true, 00:08:28.656 "reset": true, 00:08:28.656 "nvme_admin": false, 00:08:28.656 "nvme_io": false, 00:08:28.656 "nvme_io_md": false, 00:08:28.656 "write_zeroes": true, 
00:08:28.656 "zcopy": true, 00:08:28.656 "get_zone_info": false, 00:08:28.656 "zone_management": false, 00:08:28.656 "zone_append": false, 00:08:28.656 "compare": false, 00:08:28.656 "compare_and_write": false, 00:08:28.656 "abort": true, 00:08:28.656 "seek_hole": false, 00:08:28.656 "seek_data": false, 00:08:28.656 "copy": true, 00:08:28.656 "nvme_iov_md": false 00:08:28.656 }, 00:08:28.656 "memory_domains": [ 00:08:28.656 { 00:08:28.656 "dma_device_id": "system", 00:08:28.656 "dma_device_type": 1 00:08:28.656 }, 00:08:28.656 { 00:08:28.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.656 "dma_device_type": 2 00:08:28.656 } 00:08:28.656 ], 00:08:28.656 "driver_specific": {} 00:08:28.656 } 00:08:28.656 ] 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.656 [2024-11-02 23:48:22.740610] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:28.656 [2024-11-02 23:48:22.740693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:28.656 [2024-11-02 23:48:22.740735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:28.656 [2024-11-02 23:48:22.742577] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.656 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.926 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.926 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.927 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.927 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.927 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.927 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:28.927 "name": "Existed_Raid", 00:08:28.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.927 "strip_size_kb": 0, 00:08:28.927 "state": "configuring", 00:08:28.927 "raid_level": "raid1", 00:08:28.927 "superblock": false, 00:08:28.927 "num_base_bdevs": 3, 00:08:28.927 "num_base_bdevs_discovered": 2, 00:08:28.927 "num_base_bdevs_operational": 3, 00:08:28.927 "base_bdevs_list": [ 00:08:28.927 { 00:08:28.927 "name": "BaseBdev1", 00:08:28.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.927 "is_configured": false, 00:08:28.927 "data_offset": 0, 00:08:28.927 "data_size": 0 00:08:28.927 }, 00:08:28.927 { 00:08:28.927 "name": "BaseBdev2", 00:08:28.927 "uuid": "412e1947-99c6-48a7-8208-4b72e700d96a", 00:08:28.927 "is_configured": true, 00:08:28.927 "data_offset": 0, 00:08:28.927 "data_size": 65536 00:08:28.927 }, 00:08:28.927 { 00:08:28.927 "name": "BaseBdev3", 00:08:28.927 "uuid": "e9ce8e24-06af-4b96-921f-faba8a364aad", 00:08:28.927 "is_configured": true, 00:08:28.927 "data_offset": 0, 00:08:28.927 "data_size": 65536 00:08:28.927 } 00:08:28.927 ] 00:08:28.927 }' 00:08:28.927 23:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.927 23:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.193 [2024-11-02 23:48:23.175881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.193 "name": "Existed_Raid", 00:08:29.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.193 "strip_size_kb": 0, 00:08:29.193 "state": "configuring", 00:08:29.193 "raid_level": "raid1", 00:08:29.193 "superblock": false, 00:08:29.193 "num_base_bdevs": 3, 
00:08:29.193 "num_base_bdevs_discovered": 1, 00:08:29.193 "num_base_bdevs_operational": 3, 00:08:29.193 "base_bdevs_list": [ 00:08:29.193 { 00:08:29.193 "name": "BaseBdev1", 00:08:29.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.193 "is_configured": false, 00:08:29.193 "data_offset": 0, 00:08:29.193 "data_size": 0 00:08:29.193 }, 00:08:29.193 { 00:08:29.193 "name": null, 00:08:29.193 "uuid": "412e1947-99c6-48a7-8208-4b72e700d96a", 00:08:29.193 "is_configured": false, 00:08:29.193 "data_offset": 0, 00:08:29.193 "data_size": 65536 00:08:29.193 }, 00:08:29.193 { 00:08:29.193 "name": "BaseBdev3", 00:08:29.193 "uuid": "e9ce8e24-06af-4b96-921f-faba8a364aad", 00:08:29.193 "is_configured": true, 00:08:29.193 "data_offset": 0, 00:08:29.193 "data_size": 65536 00:08:29.193 } 00:08:29.193 ] 00:08:29.193 }' 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.193 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.762 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.762 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.762 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.762 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.763 23:48:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.763 [2024-11-02 23:48:23.649808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:29.763 BaseBdev1 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.763 [ 00:08:29.763 { 00:08:29.763 "name": "BaseBdev1", 00:08:29.763 "aliases": [ 00:08:29.763 "c2c464e7-9992-4911-80bc-276da94648af" 00:08:29.763 ], 00:08:29.763 "product_name": "Malloc disk", 
00:08:29.763 "block_size": 512, 00:08:29.763 "num_blocks": 65536, 00:08:29.763 "uuid": "c2c464e7-9992-4911-80bc-276da94648af", 00:08:29.763 "assigned_rate_limits": { 00:08:29.763 "rw_ios_per_sec": 0, 00:08:29.763 "rw_mbytes_per_sec": 0, 00:08:29.763 "r_mbytes_per_sec": 0, 00:08:29.763 "w_mbytes_per_sec": 0 00:08:29.763 }, 00:08:29.763 "claimed": true, 00:08:29.763 "claim_type": "exclusive_write", 00:08:29.763 "zoned": false, 00:08:29.763 "supported_io_types": { 00:08:29.763 "read": true, 00:08:29.763 "write": true, 00:08:29.763 "unmap": true, 00:08:29.763 "flush": true, 00:08:29.763 "reset": true, 00:08:29.763 "nvme_admin": false, 00:08:29.763 "nvme_io": false, 00:08:29.763 "nvme_io_md": false, 00:08:29.763 "write_zeroes": true, 00:08:29.763 "zcopy": true, 00:08:29.763 "get_zone_info": false, 00:08:29.763 "zone_management": false, 00:08:29.763 "zone_append": false, 00:08:29.763 "compare": false, 00:08:29.763 "compare_and_write": false, 00:08:29.763 "abort": true, 00:08:29.763 "seek_hole": false, 00:08:29.763 "seek_data": false, 00:08:29.763 "copy": true, 00:08:29.763 "nvme_iov_md": false 00:08:29.763 }, 00:08:29.763 "memory_domains": [ 00:08:29.763 { 00:08:29.763 "dma_device_id": "system", 00:08:29.763 "dma_device_type": 1 00:08:29.763 }, 00:08:29.763 { 00:08:29.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.763 "dma_device_type": 2 00:08:29.763 } 00:08:29.763 ], 00:08:29.763 "driver_specific": {} 00:08:29.763 } 00:08:29.763 ] 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.763 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.763 "name": "Existed_Raid", 00:08:29.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.763 "strip_size_kb": 0, 00:08:29.763 "state": "configuring", 00:08:29.763 "raid_level": "raid1", 00:08:29.763 "superblock": false, 00:08:29.763 "num_base_bdevs": 3, 00:08:29.763 "num_base_bdevs_discovered": 2, 00:08:29.763 "num_base_bdevs_operational": 3, 00:08:29.763 "base_bdevs_list": [ 00:08:29.763 { 00:08:29.763 "name": "BaseBdev1", 00:08:29.763 "uuid": 
"c2c464e7-9992-4911-80bc-276da94648af", 00:08:29.763 "is_configured": true, 00:08:29.763 "data_offset": 0, 00:08:29.763 "data_size": 65536 00:08:29.763 }, 00:08:29.763 { 00:08:29.763 "name": null, 00:08:29.763 "uuid": "412e1947-99c6-48a7-8208-4b72e700d96a", 00:08:29.763 "is_configured": false, 00:08:29.763 "data_offset": 0, 00:08:29.763 "data_size": 65536 00:08:29.763 }, 00:08:29.763 { 00:08:29.763 "name": "BaseBdev3", 00:08:29.763 "uuid": "e9ce8e24-06af-4b96-921f-faba8a364aad", 00:08:29.763 "is_configured": true, 00:08:29.763 "data_offset": 0, 00:08:29.764 "data_size": 65536 00:08:29.764 } 00:08:29.764 ] 00:08:29.764 }' 00:08:29.764 23:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.764 23:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.333 [2024-11-02 23:48:24.196930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:30.333 23:48:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.333 "name": "Existed_Raid", 00:08:30.333 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:30.333 "strip_size_kb": 0, 00:08:30.333 "state": "configuring", 00:08:30.333 "raid_level": "raid1", 00:08:30.333 "superblock": false, 00:08:30.333 "num_base_bdevs": 3, 00:08:30.333 "num_base_bdevs_discovered": 1, 00:08:30.333 "num_base_bdevs_operational": 3, 00:08:30.333 "base_bdevs_list": [ 00:08:30.333 { 00:08:30.333 "name": "BaseBdev1", 00:08:30.333 "uuid": "c2c464e7-9992-4911-80bc-276da94648af", 00:08:30.333 "is_configured": true, 00:08:30.333 "data_offset": 0, 00:08:30.333 "data_size": 65536 00:08:30.333 }, 00:08:30.333 { 00:08:30.333 "name": null, 00:08:30.333 "uuid": "412e1947-99c6-48a7-8208-4b72e700d96a", 00:08:30.333 "is_configured": false, 00:08:30.333 "data_offset": 0, 00:08:30.333 "data_size": 65536 00:08:30.333 }, 00:08:30.333 { 00:08:30.333 "name": null, 00:08:30.333 "uuid": "e9ce8e24-06af-4b96-921f-faba8a364aad", 00:08:30.333 "is_configured": false, 00:08:30.333 "data_offset": 0, 00:08:30.333 "data_size": 65536 00:08:30.333 } 00:08:30.333 ] 00:08:30.333 }' 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.333 23:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.593 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:30.593 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.593 23:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.593 23:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.593 23:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.593 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:30.593 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:30.593 23:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.593 23:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.853 [2024-11-02 23:48:24.688121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:30.853 23:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.853 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:30.853 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.853 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.853 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:30.853 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:30.853 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.853 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.853 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.853 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.853 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.853 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.853 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.853 23:48:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.853 23:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.853 23:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.853 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.853 "name": "Existed_Raid", 00:08:30.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.853 "strip_size_kb": 0, 00:08:30.853 "state": "configuring", 00:08:30.853 "raid_level": "raid1", 00:08:30.853 "superblock": false, 00:08:30.853 "num_base_bdevs": 3, 00:08:30.853 "num_base_bdevs_discovered": 2, 00:08:30.853 "num_base_bdevs_operational": 3, 00:08:30.853 "base_bdevs_list": [ 00:08:30.853 { 00:08:30.853 "name": "BaseBdev1", 00:08:30.853 "uuid": "c2c464e7-9992-4911-80bc-276da94648af", 00:08:30.853 "is_configured": true, 00:08:30.853 "data_offset": 0, 00:08:30.853 "data_size": 65536 00:08:30.853 }, 00:08:30.853 { 00:08:30.853 "name": null, 00:08:30.853 "uuid": "412e1947-99c6-48a7-8208-4b72e700d96a", 00:08:30.853 "is_configured": false, 00:08:30.853 "data_offset": 0, 00:08:30.853 "data_size": 65536 00:08:30.853 }, 00:08:30.853 { 00:08:30.853 "name": "BaseBdev3", 00:08:30.853 "uuid": "e9ce8e24-06af-4b96-921f-faba8a364aad", 00:08:30.853 "is_configured": true, 00:08:30.853 "data_offset": 0, 00:08:30.853 "data_size": 65536 00:08:30.853 } 00:08:30.853 ] 00:08:30.853 }' 00:08:30.853 23:48:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.853 23:48:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.112 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.112 23:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.112 23:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:31.112 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:31.112 23:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.113 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:31.113 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:31.113 23:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.113 23:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.113 [2024-11-02 23:48:25.199269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:31.372 23:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.372 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:31.372 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.372 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.372 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:31.372 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:31.372 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.372 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.372 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.372 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.372 23:48:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.372 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.372 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.372 23:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.372 23:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.372 23:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.372 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.372 "name": "Existed_Raid", 00:08:31.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.372 "strip_size_kb": 0, 00:08:31.372 "state": "configuring", 00:08:31.372 "raid_level": "raid1", 00:08:31.372 "superblock": false, 00:08:31.372 "num_base_bdevs": 3, 00:08:31.372 "num_base_bdevs_discovered": 1, 00:08:31.372 "num_base_bdevs_operational": 3, 00:08:31.372 "base_bdevs_list": [ 00:08:31.372 { 00:08:31.372 "name": null, 00:08:31.372 "uuid": "c2c464e7-9992-4911-80bc-276da94648af", 00:08:31.372 "is_configured": false, 00:08:31.372 "data_offset": 0, 00:08:31.372 "data_size": 65536 00:08:31.372 }, 00:08:31.372 { 00:08:31.372 "name": null, 00:08:31.372 "uuid": "412e1947-99c6-48a7-8208-4b72e700d96a", 00:08:31.372 "is_configured": false, 00:08:31.372 "data_offset": 0, 00:08:31.372 "data_size": 65536 00:08:31.372 }, 00:08:31.372 { 00:08:31.372 "name": "BaseBdev3", 00:08:31.372 "uuid": "e9ce8e24-06af-4b96-921f-faba8a364aad", 00:08:31.372 "is_configured": true, 00:08:31.372 "data_offset": 0, 00:08:31.372 "data_size": 65536 00:08:31.372 } 00:08:31.372 ] 00:08:31.372 }' 00:08:31.372 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.372 23:48:25 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.632 [2024-11-02 23:48:25.676928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.632 23:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.891 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.891 "name": "Existed_Raid", 00:08:31.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.891 "strip_size_kb": 0, 00:08:31.891 "state": "configuring", 00:08:31.891 "raid_level": "raid1", 00:08:31.891 "superblock": false, 00:08:31.891 "num_base_bdevs": 3, 00:08:31.891 "num_base_bdevs_discovered": 2, 00:08:31.891 "num_base_bdevs_operational": 3, 00:08:31.891 "base_bdevs_list": [ 00:08:31.891 { 00:08:31.891 "name": null, 00:08:31.891 "uuid": "c2c464e7-9992-4911-80bc-276da94648af", 00:08:31.891 "is_configured": false, 00:08:31.891 "data_offset": 0, 00:08:31.891 "data_size": 65536 00:08:31.891 }, 00:08:31.891 { 00:08:31.891 "name": "BaseBdev2", 00:08:31.891 "uuid": "412e1947-99c6-48a7-8208-4b72e700d96a", 00:08:31.891 "is_configured": true, 00:08:31.891 "data_offset": 0, 00:08:31.891 "data_size": 65536 00:08:31.891 }, 00:08:31.891 { 
00:08:31.891 "name": "BaseBdev3", 00:08:31.891 "uuid": "e9ce8e24-06af-4b96-921f-faba8a364aad", 00:08:31.891 "is_configured": true, 00:08:31.891 "data_offset": 0, 00:08:31.891 "data_size": 65536 00:08:31.891 } 00:08:31.891 ] 00:08:31.891 }' 00:08:31.891 23:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.891 23:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.150 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:32.150 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.150 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.150 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.150 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.150 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:32.150 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.150 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.150 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.150 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:32.150 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.150 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c2c464e7-9992-4911-80bc-276da94648af 00:08:32.150 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.150 23:48:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.150 [2024-11-02 23:48:26.214857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:32.150 [2024-11-02 23:48:26.214963] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:32.150 [2024-11-02 23:48:26.214988] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:32.150 [2024-11-02 23:48:26.215269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:32.150 [2024-11-02 23:48:26.215425] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:32.150 [2024-11-02 23:48:26.215470] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:32.150 [2024-11-02 23:48:26.215677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.150 NewBaseBdev 00:08:32.150 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.150 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:32.150 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:08:32.150 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:32.150 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:32.151 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:32.151 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:32.151 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:32.151 23:48:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.151 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.151 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.151 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:32.151 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.151 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.151 [ 00:08:32.151 { 00:08:32.151 "name": "NewBaseBdev", 00:08:32.151 "aliases": [ 00:08:32.151 "c2c464e7-9992-4911-80bc-276da94648af" 00:08:32.151 ], 00:08:32.151 "product_name": "Malloc disk", 00:08:32.151 "block_size": 512, 00:08:32.151 "num_blocks": 65536, 00:08:32.151 "uuid": "c2c464e7-9992-4911-80bc-276da94648af", 00:08:32.151 "assigned_rate_limits": { 00:08:32.151 "rw_ios_per_sec": 0, 00:08:32.410 "rw_mbytes_per_sec": 0, 00:08:32.410 "r_mbytes_per_sec": 0, 00:08:32.410 "w_mbytes_per_sec": 0 00:08:32.410 }, 00:08:32.410 "claimed": true, 00:08:32.410 "claim_type": "exclusive_write", 00:08:32.410 "zoned": false, 00:08:32.410 "supported_io_types": { 00:08:32.410 "read": true, 00:08:32.410 "write": true, 00:08:32.410 "unmap": true, 00:08:32.410 "flush": true, 00:08:32.410 "reset": true, 00:08:32.410 "nvme_admin": false, 00:08:32.410 "nvme_io": false, 00:08:32.410 "nvme_io_md": false, 00:08:32.410 "write_zeroes": true, 00:08:32.410 "zcopy": true, 00:08:32.410 "get_zone_info": false, 00:08:32.410 "zone_management": false, 00:08:32.410 "zone_append": false, 00:08:32.410 "compare": false, 00:08:32.410 "compare_and_write": false, 00:08:32.410 "abort": true, 00:08:32.410 "seek_hole": false, 00:08:32.410 "seek_data": false, 00:08:32.410 "copy": true, 00:08:32.410 "nvme_iov_md": false 00:08:32.410 }, 00:08:32.410 "memory_domains": [ 00:08:32.410 { 00:08:32.410 
"dma_device_id": "system", 00:08:32.410 "dma_device_type": 1 00:08:32.410 }, 00:08:32.410 { 00:08:32.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.410 "dma_device_type": 2 00:08:32.410 } 00:08:32.410 ], 00:08:32.410 "driver_specific": {} 00:08:32.410 } 00:08:32.410 ] 00:08:32.410 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.410 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:32.411 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:32.411 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.411 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.411 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:32.411 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:32.411 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.411 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.411 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.411 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.411 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.411 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.411 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.411 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:32.411 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.411 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.411 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.411 "name": "Existed_Raid", 00:08:32.411 "uuid": "e1926584-751e-40eb-8065-626e7c452bd4", 00:08:32.411 "strip_size_kb": 0, 00:08:32.411 "state": "online", 00:08:32.411 "raid_level": "raid1", 00:08:32.411 "superblock": false, 00:08:32.411 "num_base_bdevs": 3, 00:08:32.411 "num_base_bdevs_discovered": 3, 00:08:32.411 "num_base_bdevs_operational": 3, 00:08:32.411 "base_bdevs_list": [ 00:08:32.411 { 00:08:32.411 "name": "NewBaseBdev", 00:08:32.411 "uuid": "c2c464e7-9992-4911-80bc-276da94648af", 00:08:32.411 "is_configured": true, 00:08:32.411 "data_offset": 0, 00:08:32.411 "data_size": 65536 00:08:32.411 }, 00:08:32.411 { 00:08:32.411 "name": "BaseBdev2", 00:08:32.411 "uuid": "412e1947-99c6-48a7-8208-4b72e700d96a", 00:08:32.411 "is_configured": true, 00:08:32.411 "data_offset": 0, 00:08:32.411 "data_size": 65536 00:08:32.411 }, 00:08:32.411 { 00:08:32.411 "name": "BaseBdev3", 00:08:32.411 "uuid": "e9ce8e24-06af-4b96-921f-faba8a364aad", 00:08:32.411 "is_configured": true, 00:08:32.411 "data_offset": 0, 00:08:32.411 "data_size": 65536 00:08:32.411 } 00:08:32.411 ] 00:08:32.411 }' 00:08:32.411 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.411 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.717 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:32.717 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:32.717 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:32.717 23:48:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:32.717 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:32.717 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:32.717 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:32.717 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:32.717 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.717 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.717 [2024-11-02 23:48:26.726357] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.717 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.717 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:32.717 "name": "Existed_Raid", 00:08:32.717 "aliases": [ 00:08:32.717 "e1926584-751e-40eb-8065-626e7c452bd4" 00:08:32.717 ], 00:08:32.717 "product_name": "Raid Volume", 00:08:32.717 "block_size": 512, 00:08:32.717 "num_blocks": 65536, 00:08:32.717 "uuid": "e1926584-751e-40eb-8065-626e7c452bd4", 00:08:32.717 "assigned_rate_limits": { 00:08:32.717 "rw_ios_per_sec": 0, 00:08:32.717 "rw_mbytes_per_sec": 0, 00:08:32.717 "r_mbytes_per_sec": 0, 00:08:32.717 "w_mbytes_per_sec": 0 00:08:32.717 }, 00:08:32.717 "claimed": false, 00:08:32.717 "zoned": false, 00:08:32.717 "supported_io_types": { 00:08:32.717 "read": true, 00:08:32.717 "write": true, 00:08:32.717 "unmap": false, 00:08:32.717 "flush": false, 00:08:32.717 "reset": true, 00:08:32.717 "nvme_admin": false, 00:08:32.717 "nvme_io": false, 00:08:32.717 "nvme_io_md": false, 00:08:32.717 "write_zeroes": true, 00:08:32.717 "zcopy": false, 00:08:32.717 
"get_zone_info": false, 00:08:32.717 "zone_management": false, 00:08:32.717 "zone_append": false, 00:08:32.717 "compare": false, 00:08:32.717 "compare_and_write": false, 00:08:32.717 "abort": false, 00:08:32.717 "seek_hole": false, 00:08:32.717 "seek_data": false, 00:08:32.717 "copy": false, 00:08:32.717 "nvme_iov_md": false 00:08:32.717 }, 00:08:32.717 "memory_domains": [ 00:08:32.717 { 00:08:32.717 "dma_device_id": "system", 00:08:32.717 "dma_device_type": 1 00:08:32.717 }, 00:08:32.717 { 00:08:32.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.717 "dma_device_type": 2 00:08:32.717 }, 00:08:32.717 { 00:08:32.717 "dma_device_id": "system", 00:08:32.717 "dma_device_type": 1 00:08:32.717 }, 00:08:32.717 { 00:08:32.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.717 "dma_device_type": 2 00:08:32.717 }, 00:08:32.718 { 00:08:32.718 "dma_device_id": "system", 00:08:32.718 "dma_device_type": 1 00:08:32.718 }, 00:08:32.718 { 00:08:32.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.718 "dma_device_type": 2 00:08:32.718 } 00:08:32.718 ], 00:08:32.718 "driver_specific": { 00:08:32.718 "raid": { 00:08:32.718 "uuid": "e1926584-751e-40eb-8065-626e7c452bd4", 00:08:32.718 "strip_size_kb": 0, 00:08:32.718 "state": "online", 00:08:32.718 "raid_level": "raid1", 00:08:32.718 "superblock": false, 00:08:32.718 "num_base_bdevs": 3, 00:08:32.718 "num_base_bdevs_discovered": 3, 00:08:32.718 "num_base_bdevs_operational": 3, 00:08:32.718 "base_bdevs_list": [ 00:08:32.718 { 00:08:32.718 "name": "NewBaseBdev", 00:08:32.718 "uuid": "c2c464e7-9992-4911-80bc-276da94648af", 00:08:32.718 "is_configured": true, 00:08:32.718 "data_offset": 0, 00:08:32.718 "data_size": 65536 00:08:32.718 }, 00:08:32.718 { 00:08:32.718 "name": "BaseBdev2", 00:08:32.718 "uuid": "412e1947-99c6-48a7-8208-4b72e700d96a", 00:08:32.718 "is_configured": true, 00:08:32.718 "data_offset": 0, 00:08:32.718 "data_size": 65536 00:08:32.718 }, 00:08:32.718 { 00:08:32.718 "name": "BaseBdev3", 00:08:32.718 "uuid": 
"e9ce8e24-06af-4b96-921f-faba8a364aad", 00:08:32.718 "is_configured": true, 00:08:32.718 "data_offset": 0, 00:08:32.718 "data_size": 65536 00:08:32.718 } 00:08:32.718 ] 00:08:32.718 } 00:08:32.718 } 00:08:32.718 }' 00:08:32.718 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:32.977 BaseBdev2 00:08:32.977 BaseBdev3' 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:32.977 [2024-11-02 23:48:26.981578] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:32.977 [2024-11-02 23:48:26.981646] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:32.977 [2024-11-02 23:48:26.981750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.977 [2024-11-02 23:48:26.982028] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.977 [2024-11-02 23:48:26.982079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.977 23:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78266 00:08:32.978 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 78266 ']' 00:08:32.978 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 78266 00:08:32.978 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:32.978 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:32.978 23:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78266 00:08:32.978 23:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:32.978 23:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:32.978 23:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78266' 00:08:32.978 killing process with pid 78266 00:08:32.978 23:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 78266 00:08:32.978 
[2024-11-02 23:48:27.032408] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.978 23:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 78266 00:08:32.978 [2024-11-02 23:48:27.062535] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:33.237 ************************************ 00:08:33.238 END TEST raid_state_function_test 00:08:33.238 ************************************ 00:08:33.238 23:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:33.238 00:08:33.238 real 0m8.862s 00:08:33.238 user 0m15.228s 00:08:33.238 sys 0m1.743s 00:08:33.238 23:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:33.238 23:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.498 23:48:27 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:08:33.498 23:48:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:33.498 23:48:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:33.498 23:48:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:33.498 ************************************ 00:08:33.498 START TEST raid_state_function_test_sb 00:08:33.498 ************************************ 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:33.498 23:48:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:33.498 
23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78871 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78871' 00:08:33.498 Process raid pid: 78871 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78871 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 78871 ']' 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:33.498 23:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.498 [2024-11-02 23:48:27.438872] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:08:33.498 [2024-11-02 23:48:27.439062] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.758 [2024-11-02 23:48:27.594321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.758 [2024-11-02 23:48:27.619633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.758 [2024-11-02 23:48:27.660901] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.758 [2024-11-02 23:48:27.661012] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.327 [2024-11-02 23:48:28.265536] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:34.327 [2024-11-02 23:48:28.265722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:34.327 [2024-11-02 23:48:28.265791] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.327 [2024-11-02 23:48:28.265819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.327 [2024-11-02 23:48:28.265866] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:08:34.327 [2024-11-02 23:48:28.265906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.327 "name": "Existed_Raid", 00:08:34.327 "uuid": "b410fc6a-796a-4bd7-8c03-39da34881674", 00:08:34.327 "strip_size_kb": 0, 00:08:34.327 "state": "configuring", 00:08:34.327 "raid_level": "raid1", 00:08:34.327 "superblock": true, 00:08:34.327 "num_base_bdevs": 3, 00:08:34.327 "num_base_bdevs_discovered": 0, 00:08:34.327 "num_base_bdevs_operational": 3, 00:08:34.327 "base_bdevs_list": [ 00:08:34.327 { 00:08:34.327 "name": "BaseBdev1", 00:08:34.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.327 "is_configured": false, 00:08:34.327 "data_offset": 0, 00:08:34.327 "data_size": 0 00:08:34.327 }, 00:08:34.327 { 00:08:34.327 "name": "BaseBdev2", 00:08:34.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.327 "is_configured": false, 00:08:34.327 "data_offset": 0, 00:08:34.327 "data_size": 0 00:08:34.327 }, 00:08:34.327 { 00:08:34.327 "name": "BaseBdev3", 00:08:34.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.327 "is_configured": false, 00:08:34.327 "data_offset": 0, 00:08:34.327 "data_size": 0 00:08:34.327 } 00:08:34.327 ] 00:08:34.327 }' 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.327 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.587 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:34.587 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.587 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.847 [2024-11-02 23:48:28.680752] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:34.847 [2024-11-02 23:48:28.680805] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.847 [2024-11-02 23:48:28.692769] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:34.847 [2024-11-02 23:48:28.692844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:34.847 [2024-11-02 23:48:28.692871] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.847 [2024-11-02 23:48:28.692894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.847 [2024-11-02 23:48:28.692911] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:34.847 [2024-11-02 23:48:28.692931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.847 BaseBdev1 00:08:34.847 [2024-11-02 23:48:28.713317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.847 [ 00:08:34.847 { 00:08:34.847 "name": "BaseBdev1", 00:08:34.847 "aliases": [ 00:08:34.847 "19009e0d-696b-4414-baa8-d3dc205e27af" 00:08:34.847 ], 00:08:34.847 "product_name": "Malloc disk", 00:08:34.847 "block_size": 512, 00:08:34.847 "num_blocks": 65536, 00:08:34.847 "uuid": "19009e0d-696b-4414-baa8-d3dc205e27af", 00:08:34.847 "assigned_rate_limits": { 00:08:34.847 
"rw_ios_per_sec": 0, 00:08:34.847 "rw_mbytes_per_sec": 0, 00:08:34.847 "r_mbytes_per_sec": 0, 00:08:34.847 "w_mbytes_per_sec": 0 00:08:34.847 }, 00:08:34.847 "claimed": true, 00:08:34.847 "claim_type": "exclusive_write", 00:08:34.847 "zoned": false, 00:08:34.847 "supported_io_types": { 00:08:34.847 "read": true, 00:08:34.847 "write": true, 00:08:34.847 "unmap": true, 00:08:34.847 "flush": true, 00:08:34.847 "reset": true, 00:08:34.847 "nvme_admin": false, 00:08:34.847 "nvme_io": false, 00:08:34.847 "nvme_io_md": false, 00:08:34.847 "write_zeroes": true, 00:08:34.847 "zcopy": true, 00:08:34.847 "get_zone_info": false, 00:08:34.847 "zone_management": false, 00:08:34.847 "zone_append": false, 00:08:34.847 "compare": false, 00:08:34.847 "compare_and_write": false, 00:08:34.847 "abort": true, 00:08:34.847 "seek_hole": false, 00:08:34.847 "seek_data": false, 00:08:34.847 "copy": true, 00:08:34.847 "nvme_iov_md": false 00:08:34.847 }, 00:08:34.847 "memory_domains": [ 00:08:34.847 { 00:08:34.847 "dma_device_id": "system", 00:08:34.847 "dma_device_type": 1 00:08:34.847 }, 00:08:34.847 { 00:08:34.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.847 "dma_device_type": 2 00:08:34.847 } 00:08:34.847 ], 00:08:34.847 "driver_specific": {} 00:08:34.847 } 00:08:34.847 ] 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.847 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.848 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.848 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.848 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.848 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.848 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.848 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.848 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.848 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.848 "name": "Existed_Raid", 00:08:34.848 "uuid": "ef73d1cf-54ec-4129-823d-89f1d58d5f2e", 00:08:34.848 "strip_size_kb": 0, 00:08:34.848 "state": "configuring", 00:08:34.848 "raid_level": "raid1", 00:08:34.848 "superblock": true, 00:08:34.848 "num_base_bdevs": 3, 00:08:34.848 "num_base_bdevs_discovered": 1, 00:08:34.848 "num_base_bdevs_operational": 3, 00:08:34.848 "base_bdevs_list": [ 00:08:34.848 { 00:08:34.848 "name": "BaseBdev1", 00:08:34.848 "uuid": "19009e0d-696b-4414-baa8-d3dc205e27af", 00:08:34.848 "is_configured": true, 00:08:34.848 "data_offset": 2048, 00:08:34.848 "data_size": 63488 
00:08:34.848 }, 00:08:34.848 { 00:08:34.848 "name": "BaseBdev2", 00:08:34.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.848 "is_configured": false, 00:08:34.848 "data_offset": 0, 00:08:34.848 "data_size": 0 00:08:34.848 }, 00:08:34.848 { 00:08:34.848 "name": "BaseBdev3", 00:08:34.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.848 "is_configured": false, 00:08:34.848 "data_offset": 0, 00:08:34.848 "data_size": 0 00:08:34.848 } 00:08:34.848 ] 00:08:34.848 }' 00:08:34.848 23:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.848 23:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.108 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:35.108 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.108 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.367 [2024-11-02 23:48:29.204541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:35.367 [2024-11-02 23:48:29.204651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.367 [2024-11-02 23:48:29.216541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.367 [2024-11-02 23:48:29.218398] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.367 [2024-11-02 23:48:29.218470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:35.367 [2024-11-02 23:48:29.218498] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:35.367 [2024-11-02 23:48:29.218522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.367 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.367 "name": "Existed_Raid", 00:08:35.367 "uuid": "4a2ed41d-0d4d-4434-9335-f02c8bf69a87", 00:08:35.367 "strip_size_kb": 0, 00:08:35.367 "state": "configuring", 00:08:35.367 "raid_level": "raid1", 00:08:35.367 "superblock": true, 00:08:35.367 "num_base_bdevs": 3, 00:08:35.367 "num_base_bdevs_discovered": 1, 00:08:35.367 "num_base_bdevs_operational": 3, 00:08:35.367 "base_bdevs_list": [ 00:08:35.367 { 00:08:35.367 "name": "BaseBdev1", 00:08:35.367 "uuid": "19009e0d-696b-4414-baa8-d3dc205e27af", 00:08:35.367 "is_configured": true, 00:08:35.367 "data_offset": 2048, 00:08:35.367 "data_size": 63488 00:08:35.367 }, 00:08:35.367 { 00:08:35.367 "name": "BaseBdev2", 00:08:35.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.367 "is_configured": false, 00:08:35.367 "data_offset": 0, 00:08:35.367 "data_size": 0 00:08:35.367 }, 00:08:35.367 { 00:08:35.367 "name": "BaseBdev3", 00:08:35.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.368 "is_configured": false, 00:08:35.368 "data_offset": 0, 00:08:35.368 "data_size": 0 00:08:35.368 } 00:08:35.368 ] 00:08:35.368 }' 00:08:35.368 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.368 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.628 [2024-11-02 23:48:29.594610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.628 BaseBdev2 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.628 [ 00:08:35.628 { 00:08:35.628 "name": "BaseBdev2", 00:08:35.628 "aliases": [ 00:08:35.628 "14670ffe-f3ba-419f-b523-f2aa8dbdc301" 00:08:35.628 ], 00:08:35.628 "product_name": "Malloc disk", 00:08:35.628 "block_size": 512, 00:08:35.628 "num_blocks": 65536, 00:08:35.628 "uuid": "14670ffe-f3ba-419f-b523-f2aa8dbdc301", 00:08:35.628 "assigned_rate_limits": { 00:08:35.628 "rw_ios_per_sec": 0, 00:08:35.628 "rw_mbytes_per_sec": 0, 00:08:35.628 "r_mbytes_per_sec": 0, 00:08:35.628 "w_mbytes_per_sec": 0 00:08:35.628 }, 00:08:35.628 "claimed": true, 00:08:35.628 "claim_type": "exclusive_write", 00:08:35.628 "zoned": false, 00:08:35.628 "supported_io_types": { 00:08:35.628 "read": true, 00:08:35.628 "write": true, 00:08:35.628 "unmap": true, 00:08:35.628 "flush": true, 00:08:35.628 "reset": true, 00:08:35.628 "nvme_admin": false, 00:08:35.628 "nvme_io": false, 00:08:35.628 "nvme_io_md": false, 00:08:35.628 "write_zeroes": true, 00:08:35.628 "zcopy": true, 00:08:35.628 "get_zone_info": false, 00:08:35.628 "zone_management": false, 00:08:35.628 "zone_append": false, 00:08:35.628 "compare": false, 00:08:35.628 "compare_and_write": false, 00:08:35.628 "abort": true, 00:08:35.628 "seek_hole": false, 00:08:35.628 "seek_data": false, 00:08:35.628 "copy": true, 00:08:35.628 "nvme_iov_md": false 00:08:35.628 }, 00:08:35.628 "memory_domains": [ 00:08:35.628 { 00:08:35.628 "dma_device_id": "system", 00:08:35.628 "dma_device_type": 1 00:08:35.628 }, 00:08:35.628 { 00:08:35.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.628 "dma_device_type": 2 00:08:35.628 } 00:08:35.628 ], 00:08:35.628 "driver_specific": {} 00:08:35.628 } 00:08:35.628 ] 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.628 
23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.628 "name": "Existed_Raid", 00:08:35.628 "uuid": "4a2ed41d-0d4d-4434-9335-f02c8bf69a87", 00:08:35.628 "strip_size_kb": 0, 00:08:35.628 "state": "configuring", 00:08:35.628 "raid_level": "raid1", 00:08:35.628 "superblock": true, 00:08:35.628 "num_base_bdevs": 3, 00:08:35.628 "num_base_bdevs_discovered": 2, 00:08:35.628 "num_base_bdevs_operational": 3, 00:08:35.628 "base_bdevs_list": [ 00:08:35.628 { 00:08:35.628 "name": "BaseBdev1", 00:08:35.628 "uuid": "19009e0d-696b-4414-baa8-d3dc205e27af", 00:08:35.628 "is_configured": true, 00:08:35.628 "data_offset": 2048, 00:08:35.628 "data_size": 63488 00:08:35.628 }, 00:08:35.628 { 00:08:35.628 "name": "BaseBdev2", 00:08:35.628 "uuid": "14670ffe-f3ba-419f-b523-f2aa8dbdc301", 00:08:35.628 "is_configured": true, 00:08:35.628 "data_offset": 2048, 00:08:35.628 "data_size": 63488 00:08:35.628 }, 00:08:35.628 { 00:08:35.628 "name": "BaseBdev3", 00:08:35.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.628 "is_configured": false, 00:08:35.628 "data_offset": 0, 00:08:35.628 "data_size": 0 00:08:35.628 } 00:08:35.628 ] 00:08:35.628 }' 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.628 23:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.196 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:36.196 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.196 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.196 [2024-11-02 23:48:30.110224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:36.196 [2024-11-02 23:48:30.110538] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000001900 00:08:36.196 [2024-11-02 23:48:30.110610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:36.196 [2024-11-02 23:48:30.110990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:36.196 BaseBdev3 00:08:36.196 [2024-11-02 23:48:30.111200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:36.196 [2024-11-02 23:48:30.111215] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:36.196 [2024-11-02 23:48:30.111355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.196 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.196 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:36.196 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:36.196 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:36.196 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:36.196 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:36.196 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:36.196 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:36.196 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.196 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.196 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.196 23:48:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:36.196 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.196 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.196 [ 00:08:36.196 { 00:08:36.196 "name": "BaseBdev3", 00:08:36.196 "aliases": [ 00:08:36.196 "85913c14-f2e8-4aad-bd12-08e963f59ea1" 00:08:36.196 ], 00:08:36.196 "product_name": "Malloc disk", 00:08:36.196 "block_size": 512, 00:08:36.196 "num_blocks": 65536, 00:08:36.196 "uuid": "85913c14-f2e8-4aad-bd12-08e963f59ea1", 00:08:36.196 "assigned_rate_limits": { 00:08:36.196 "rw_ios_per_sec": 0, 00:08:36.196 "rw_mbytes_per_sec": 0, 00:08:36.196 "r_mbytes_per_sec": 0, 00:08:36.196 "w_mbytes_per_sec": 0 00:08:36.196 }, 00:08:36.196 "claimed": true, 00:08:36.196 "claim_type": "exclusive_write", 00:08:36.196 "zoned": false, 00:08:36.196 "supported_io_types": { 00:08:36.196 "read": true, 00:08:36.196 "write": true, 00:08:36.196 "unmap": true, 00:08:36.196 "flush": true, 00:08:36.196 "reset": true, 00:08:36.196 "nvme_admin": false, 00:08:36.196 "nvme_io": false, 00:08:36.196 "nvme_io_md": false, 00:08:36.196 "write_zeroes": true, 00:08:36.196 "zcopy": true, 00:08:36.196 "get_zone_info": false, 00:08:36.196 "zone_management": false, 00:08:36.196 "zone_append": false, 00:08:36.196 "compare": false, 00:08:36.196 "compare_and_write": false, 00:08:36.196 "abort": true, 00:08:36.196 "seek_hole": false, 00:08:36.196 "seek_data": false, 00:08:36.196 "copy": true, 00:08:36.196 "nvme_iov_md": false 00:08:36.196 }, 00:08:36.196 "memory_domains": [ 00:08:36.196 { 00:08:36.196 "dma_device_id": "system", 00:08:36.196 "dma_device_type": 1 00:08:36.196 }, 00:08:36.196 { 00:08:36.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.197 "dma_device_type": 2 00:08:36.197 } 00:08:36.197 ], 00:08:36.197 "driver_specific": {} 00:08:36.197 } 00:08:36.197 ] 
00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.197 
23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.197 "name": "Existed_Raid", 00:08:36.197 "uuid": "4a2ed41d-0d4d-4434-9335-f02c8bf69a87", 00:08:36.197 "strip_size_kb": 0, 00:08:36.197 "state": "online", 00:08:36.197 "raid_level": "raid1", 00:08:36.197 "superblock": true, 00:08:36.197 "num_base_bdevs": 3, 00:08:36.197 "num_base_bdevs_discovered": 3, 00:08:36.197 "num_base_bdevs_operational": 3, 00:08:36.197 "base_bdevs_list": [ 00:08:36.197 { 00:08:36.197 "name": "BaseBdev1", 00:08:36.197 "uuid": "19009e0d-696b-4414-baa8-d3dc205e27af", 00:08:36.197 "is_configured": true, 00:08:36.197 "data_offset": 2048, 00:08:36.197 "data_size": 63488 00:08:36.197 }, 00:08:36.197 { 00:08:36.197 "name": "BaseBdev2", 00:08:36.197 "uuid": "14670ffe-f3ba-419f-b523-f2aa8dbdc301", 00:08:36.197 "is_configured": true, 00:08:36.197 "data_offset": 2048, 00:08:36.197 "data_size": 63488 00:08:36.197 }, 00:08:36.197 { 00:08:36.197 "name": "BaseBdev3", 00:08:36.197 "uuid": "85913c14-f2e8-4aad-bd12-08e963f59ea1", 00:08:36.197 "is_configured": true, 00:08:36.197 "data_offset": 2048, 00:08:36.197 "data_size": 63488 00:08:36.197 } 00:08:36.197 ] 00:08:36.197 }' 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.197 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.766 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:36.766 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:36.766 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:08:36.766 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:36.766 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:36.766 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:36.766 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:36.766 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:36.766 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.766 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.766 [2024-11-02 23:48:30.605718] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.766 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.766 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:36.766 "name": "Existed_Raid", 00:08:36.766 "aliases": [ 00:08:36.766 "4a2ed41d-0d4d-4434-9335-f02c8bf69a87" 00:08:36.766 ], 00:08:36.766 "product_name": "Raid Volume", 00:08:36.766 "block_size": 512, 00:08:36.766 "num_blocks": 63488, 00:08:36.766 "uuid": "4a2ed41d-0d4d-4434-9335-f02c8bf69a87", 00:08:36.766 "assigned_rate_limits": { 00:08:36.766 "rw_ios_per_sec": 0, 00:08:36.766 "rw_mbytes_per_sec": 0, 00:08:36.766 "r_mbytes_per_sec": 0, 00:08:36.766 "w_mbytes_per_sec": 0 00:08:36.766 }, 00:08:36.766 "claimed": false, 00:08:36.766 "zoned": false, 00:08:36.766 "supported_io_types": { 00:08:36.766 "read": true, 00:08:36.766 "write": true, 00:08:36.766 "unmap": false, 00:08:36.766 "flush": false, 00:08:36.766 "reset": true, 00:08:36.766 "nvme_admin": false, 00:08:36.766 "nvme_io": false, 00:08:36.766 "nvme_io_md": false, 00:08:36.766 "write_zeroes": true, 
00:08:36.766 "zcopy": false, 00:08:36.766 "get_zone_info": false, 00:08:36.766 "zone_management": false, 00:08:36.766 "zone_append": false, 00:08:36.766 "compare": false, 00:08:36.766 "compare_and_write": false, 00:08:36.766 "abort": false, 00:08:36.766 "seek_hole": false, 00:08:36.766 "seek_data": false, 00:08:36.766 "copy": false, 00:08:36.766 "nvme_iov_md": false 00:08:36.766 }, 00:08:36.766 "memory_domains": [ 00:08:36.766 { 00:08:36.766 "dma_device_id": "system", 00:08:36.766 "dma_device_type": 1 00:08:36.766 }, 00:08:36.766 { 00:08:36.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.766 "dma_device_type": 2 00:08:36.766 }, 00:08:36.766 { 00:08:36.766 "dma_device_id": "system", 00:08:36.766 "dma_device_type": 1 00:08:36.766 }, 00:08:36.766 { 00:08:36.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.766 "dma_device_type": 2 00:08:36.766 }, 00:08:36.767 { 00:08:36.767 "dma_device_id": "system", 00:08:36.767 "dma_device_type": 1 00:08:36.767 }, 00:08:36.767 { 00:08:36.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.767 "dma_device_type": 2 00:08:36.767 } 00:08:36.767 ], 00:08:36.767 "driver_specific": { 00:08:36.767 "raid": { 00:08:36.767 "uuid": "4a2ed41d-0d4d-4434-9335-f02c8bf69a87", 00:08:36.767 "strip_size_kb": 0, 00:08:36.767 "state": "online", 00:08:36.767 "raid_level": "raid1", 00:08:36.767 "superblock": true, 00:08:36.767 "num_base_bdevs": 3, 00:08:36.767 "num_base_bdevs_discovered": 3, 00:08:36.767 "num_base_bdevs_operational": 3, 00:08:36.767 "base_bdevs_list": [ 00:08:36.767 { 00:08:36.767 "name": "BaseBdev1", 00:08:36.767 "uuid": "19009e0d-696b-4414-baa8-d3dc205e27af", 00:08:36.767 "is_configured": true, 00:08:36.767 "data_offset": 2048, 00:08:36.767 "data_size": 63488 00:08:36.767 }, 00:08:36.767 { 00:08:36.767 "name": "BaseBdev2", 00:08:36.767 "uuid": "14670ffe-f3ba-419f-b523-f2aa8dbdc301", 00:08:36.767 "is_configured": true, 00:08:36.767 "data_offset": 2048, 00:08:36.767 "data_size": 63488 00:08:36.767 }, 00:08:36.767 { 
00:08:36.767 "name": "BaseBdev3", 00:08:36.767 "uuid": "85913c14-f2e8-4aad-bd12-08e963f59ea1", 00:08:36.767 "is_configured": true, 00:08:36.767 "data_offset": 2048, 00:08:36.767 "data_size": 63488 00:08:36.767 } 00:08:36.767 ] 00:08:36.767 } 00:08:36.767 } 00:08:36.767 }' 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:36.767 BaseBdev2 00:08:36.767 BaseBdev3' 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.767 23:48:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.767 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.026 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.026 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.026 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:37.026 23:48:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.026 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.026 [2024-11-02 23:48:30.884983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:37.026 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.026 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:37.026 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:37.026 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:37.026 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:37.026 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:37.026 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:37.026 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.026 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.027 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:37.027 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:37.027 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.027 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.027 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.027 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.027 
23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.027 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.027 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.027 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.027 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.027 23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.027 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.027 "name": "Existed_Raid", 00:08:37.027 "uuid": "4a2ed41d-0d4d-4434-9335-f02c8bf69a87", 00:08:37.027 "strip_size_kb": 0, 00:08:37.027 "state": "online", 00:08:37.027 "raid_level": "raid1", 00:08:37.027 "superblock": true, 00:08:37.027 "num_base_bdevs": 3, 00:08:37.027 "num_base_bdevs_discovered": 2, 00:08:37.027 "num_base_bdevs_operational": 2, 00:08:37.027 "base_bdevs_list": [ 00:08:37.027 { 00:08:37.027 "name": null, 00:08:37.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.027 "is_configured": false, 00:08:37.027 "data_offset": 0, 00:08:37.027 "data_size": 63488 00:08:37.027 }, 00:08:37.027 { 00:08:37.027 "name": "BaseBdev2", 00:08:37.027 "uuid": "14670ffe-f3ba-419f-b523-f2aa8dbdc301", 00:08:37.027 "is_configured": true, 00:08:37.027 "data_offset": 2048, 00:08:37.027 "data_size": 63488 00:08:37.027 }, 00:08:37.027 { 00:08:37.027 "name": "BaseBdev3", 00:08:37.027 "uuid": "85913c14-f2e8-4aad-bd12-08e963f59ea1", 00:08:37.027 "is_configured": true, 00:08:37.027 "data_offset": 2048, 00:08:37.027 "data_size": 63488 00:08:37.027 } 00:08:37.027 ] 00:08:37.027 }' 00:08:37.027 23:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.027 
23:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.285 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:37.285 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:37.285 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.285 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:37.285 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.285 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.285 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.545 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:37.545 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:37.545 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:37.545 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.545 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.545 [2024-11-02 23:48:31.395953] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:37.545 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.545 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:37.545 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:37.545 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:37.545 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.545 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:37.545 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.545 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.545 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:37.545 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:37.545 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:37.545 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.545 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.546 [2024-11-02 23:48:31.477013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:37.546 [2024-11-02 23:48:31.477240] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.546 [2024-11-02 23:48:31.498443] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.546 [2024-11-02 23:48:31.498594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.546 [2024-11-02 23:48:31.498652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.546 BaseBdev2 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.546 [
00:08:37.546 {
00:08:37.546 "name": "BaseBdev2",
00:08:37.546 "aliases": [
00:08:37.546 "f807e85d-b895-491f-87b2-92c7fd11fb35"
00:08:37.546 ],
00:08:37.546 "product_name": "Malloc disk",
00:08:37.546 "block_size": 512,
00:08:37.546 "num_blocks": 65536,
00:08:37.546 "uuid": "f807e85d-b895-491f-87b2-92c7fd11fb35",
00:08:37.546 "assigned_rate_limits": {
00:08:37.546 "rw_ios_per_sec": 0,
00:08:37.546 "rw_mbytes_per_sec": 0,
00:08:37.546 "r_mbytes_per_sec": 0,
00:08:37.546 "w_mbytes_per_sec": 0
00:08:37.546 },
00:08:37.546 "claimed": false,
00:08:37.546 "zoned": false,
00:08:37.546 "supported_io_types": {
00:08:37.546 "read": true,
00:08:37.546 "write": true,
00:08:37.546 "unmap": true,
00:08:37.546 "flush": true,
00:08:37.546 "reset": true,
00:08:37.546 "nvme_admin": false,
00:08:37.546 "nvme_io": false,
00:08:37.546 "nvme_io_md": false,
00:08:37.546 "write_zeroes": true,
00:08:37.546 "zcopy": true,
00:08:37.546 "get_zone_info": false,
00:08:37.546 "zone_management": false,
00:08:37.546 "zone_append": false,
00:08:37.546 "compare": false,
00:08:37.546 "compare_and_write": false,
00:08:37.546 "abort": true,
00:08:37.546 "seek_hole": false,
00:08:37.546 "seek_data": false,
00:08:37.546 "copy": true,
00:08:37.546 "nvme_iov_md": false
00:08:37.546 },
00:08:37.546 "memory_domains": [
00:08:37.546 {
00:08:37.546 "dma_device_id": "system",
00:08:37.546 "dma_device_type": 1
00:08:37.546 },
00:08:37.546 {
00:08:37.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:37.546 "dma_device_type": 2
00:08:37.546 }
00:08:37.546 ],
00:08:37.546 "driver_specific": {}
00:08:37.546 }
00:08:37.546 ]
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.546 BaseBdev3
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:08:37.546 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:08:37.812 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:08:37.812 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:08:37.812 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:08:37.812 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:08:37.812 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:37.812 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.812 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:37.812 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:37.812 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:37.812 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.812 [
00:08:37.812 {
00:08:37.812 "name": "BaseBdev3",
00:08:37.812 "aliases": [
00:08:37.812 "7034ff0b-4a04-46af-865f-6af655318a76"
00:08:37.812 ],
00:08:37.812 "product_name": "Malloc disk",
00:08:37.812 "block_size": 512,
00:08:37.812 "num_blocks": 65536,
00:08:37.812 "uuid": "7034ff0b-4a04-46af-865f-6af655318a76",
00:08:37.812 "assigned_rate_limits": {
00:08:37.812 "rw_ios_per_sec": 0,
00:08:37.812 "rw_mbytes_per_sec": 0,
00:08:37.812 "r_mbytes_per_sec": 0,
00:08:37.812 "w_mbytes_per_sec": 0
00:08:37.812 },
00:08:37.812 "claimed": false,
00:08:37.812 "zoned": false,
00:08:37.812 "supported_io_types": {
00:08:37.812 "read": true,
00:08:37.812 "write": true,
00:08:37.812 "unmap": true,
00:08:37.812 "flush": true,
00:08:37.812 "reset": true,
00:08:37.812 "nvme_admin": false,
00:08:37.812 "nvme_io": false,
00:08:37.812 "nvme_io_md": false,
00:08:37.812 "write_zeroes": true,
00:08:37.812 "zcopy": true,
00:08:37.812 "get_zone_info": false,
00:08:37.812 "zone_management": false,
00:08:37.812 "zone_append": false,
00:08:37.812 "compare": false,
00:08:37.812 "compare_and_write": false,
00:08:37.812 "abort": true,
00:08:37.812 "seek_hole": false,
00:08:37.812 "seek_data": false,
00:08:37.812 "copy": true,
00:08:37.812 "nvme_iov_md": false
00:08:37.812 },
00:08:37.812 "memory_domains": [
00:08:37.812 {
00:08:37.812 "dma_device_id": "system",
00:08:37.812 "dma_device_type": 1
00:08:37.812 },
00:08:37.812 {
00:08:37.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:37.812 "dma_device_type": 2
00:08:37.812 }
00:08:37.812 ],
00:08:37.812 "driver_specific": {}
00:08:37.812 }
00:08:37.812 ]
00:08:37.812 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:37.812 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:08:37.812 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:37.812 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:37.812 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:37.812 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:37.812 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.812 [2024-11-02 23:48:31.675930] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:37.812 [2024-11-02 23:48:31.676085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:37.812 [2024-11-02 23:48:31.676132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:37.813 [2024-11-02 23:48:31.678392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:37.813 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:37.813 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:08:37.813 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:37.813 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:37.813 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:37.813 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:37.813 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:37.813 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:37.813 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:37.813 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:37.813 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:37.813 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:37.813 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:37.813 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:37.813 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.813 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:37.813 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:37.813 "name": "Existed_Raid",
00:08:37.813 "uuid": "34ff9335-f87f-4fa0-a368-f1c15a61dd58",
00:08:37.813 "strip_size_kb": 0,
00:08:37.813 "state": "configuring",
00:08:37.813 "raid_level": "raid1",
00:08:37.813 "superblock": true,
00:08:37.813 "num_base_bdevs": 3,
00:08:37.813 "num_base_bdevs_discovered": 2,
00:08:37.813 "num_base_bdevs_operational": 3,
00:08:37.813 "base_bdevs_list": [
00:08:37.813 {
00:08:37.813 "name": "BaseBdev1",
00:08:37.813 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:37.813 "is_configured": false,
00:08:37.813 "data_offset": 0,
00:08:37.813 "data_size": 0
00:08:37.813 },
00:08:37.813 {
00:08:37.813 "name": "BaseBdev2",
00:08:37.813 "uuid": "f807e85d-b895-491f-87b2-92c7fd11fb35",
00:08:37.813 "is_configured": true,
00:08:37.813 "data_offset": 2048,
00:08:37.813 "data_size": 63488
00:08:37.813 },
00:08:37.813 {
00:08:37.813 "name": "BaseBdev3",
00:08:37.813 "uuid": "7034ff0b-4a04-46af-865f-6af655318a76",
00:08:37.813 "is_configured": true,
00:08:37.813 "data_offset": 2048,
00:08:37.813 "data_size": 63488
00:08:37.813 }
00:08:37.813 ]
00:08:37.813 }'
00:08:37.813 23:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:37.813 23:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.072 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:08:38.072 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:38.072 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.072 [2024-11-02 23:48:32.155191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:38.072 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:38.072 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:08:38.072 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:38.072 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:38.072 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:38.072 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:38.072 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:38.072 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:38.072 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:38.072 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:38.072 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:38.330 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:38.330 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:38.330 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:38.330 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.330 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:38.330 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:38.330 "name": "Existed_Raid",
00:08:38.330 "uuid": "34ff9335-f87f-4fa0-a368-f1c15a61dd58",
00:08:38.330 "strip_size_kb": 0,
00:08:38.330 "state": "configuring",
00:08:38.330 "raid_level": "raid1",
00:08:38.330 "superblock": true,
00:08:38.330 "num_base_bdevs": 3,
00:08:38.330 "num_base_bdevs_discovered": 1,
00:08:38.330 "num_base_bdevs_operational": 3,
00:08:38.330 "base_bdevs_list": [
00:08:38.330 {
00:08:38.330 "name": "BaseBdev1",
00:08:38.330 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:38.330 "is_configured": false,
00:08:38.330 "data_offset": 0,
00:08:38.330 "data_size": 0
00:08:38.330 },
00:08:38.330 {
00:08:38.330 "name": null,
00:08:38.330 "uuid": "f807e85d-b895-491f-87b2-92c7fd11fb35",
00:08:38.330 "is_configured": false,
00:08:38.330 "data_offset": 0,
00:08:38.330 "data_size": 63488
00:08:38.330 },
00:08:38.330 {
00:08:38.330 "name": "BaseBdev3",
00:08:38.330 "uuid": "7034ff0b-4a04-46af-865f-6af655318a76",
00:08:38.330 "is_configured": true,
00:08:38.330 "data_offset": 2048,
00:08:38.330 "data_size": 63488
00:08:38.330 }
00:08:38.330 ]
00:08:38.330 }'
00:08:38.330 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:38.330 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.589 [2024-11-02 23:48:32.655477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed BaseBdev1
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:38.589 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.849 [
00:08:38.849 {
00:08:38.849 "name": "BaseBdev1",
00:08:38.849 "aliases": [
00:08:38.849 "d0d4cf0e-d9a4-4d14-9ccc-b6e571949055"
00:08:38.849 ],
00:08:38.849 "product_name": "Malloc disk",
00:08:38.849 "block_size": 512,
00:08:38.849 "num_blocks": 65536,
00:08:38.849 "uuid": "d0d4cf0e-d9a4-4d14-9ccc-b6e571949055",
00:08:38.849 "assigned_rate_limits": {
00:08:38.849 "rw_ios_per_sec": 0,
00:08:38.849 "rw_mbytes_per_sec": 0,
00:08:38.849 "r_mbytes_per_sec": 0,
00:08:38.849 "w_mbytes_per_sec": 0
00:08:38.849 },
00:08:38.849 "claimed": true,
00:08:38.849 "claim_type": "exclusive_write",
00:08:38.849 "zoned": false,
00:08:38.849 "supported_io_types": {
00:08:38.849 "read": true,
00:08:38.849 "write": true,
00:08:38.849 "unmap": true,
00:08:38.849 "flush": true,
00:08:38.849 "reset": true,
00:08:38.849 "nvme_admin": false,
00:08:38.849 "nvme_io": false,
00:08:38.849 "nvme_io_md": false,
00:08:38.849 "write_zeroes": true,
00:08:38.849 "zcopy": true,
00:08:38.849 "get_zone_info": false,
00:08:38.849 "zone_management": false,
00:08:38.849 "zone_append": false,
00:08:38.849 "compare": false,
00:08:38.849 "compare_and_write": false,
00:08:38.849 "abort": true,
00:08:38.849 "seek_hole": false,
00:08:38.849 "seek_data": false,
00:08:38.849 "copy": true,
00:08:38.849 "nvme_iov_md": false
00:08:38.849 },
00:08:38.849 "memory_domains": [
00:08:38.849 {
00:08:38.849 "dma_device_id": "system",
00:08:38.849 "dma_device_type": 1
00:08:38.849 },
00:08:38.849 {
00:08:38.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:38.849 "dma_device_type": 2
00:08:38.849 }
00:08:38.849 ],
00:08:38.849 "driver_specific": {}
00:08:38.849 }
00:08:38.849 ]
00:08:38.849 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:38.849 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:08:38.849 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:08:38.849 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:38.849 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:38.849 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:38.849 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:38.849 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:38.849 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:38.849 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:38.849 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:38.849 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:38.849 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:38.849 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:38.849 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:38.849 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.849 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:38.849 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:38.849 "name": "Existed_Raid",
00:08:38.849 "uuid": "34ff9335-f87f-4fa0-a368-f1c15a61dd58",
00:08:38.849 "strip_size_kb": 0,
00:08:38.849 "state": "configuring",
00:08:38.849 "raid_level": "raid1",
00:08:38.849 "superblock": true,
00:08:38.849 "num_base_bdevs": 3,
00:08:38.849 "num_base_bdevs_discovered": 2,
00:08:38.849 "num_base_bdevs_operational": 3,
00:08:38.849 "base_bdevs_list": [
00:08:38.850 {
00:08:38.850 "name": "BaseBdev1",
00:08:38.850 "uuid": "d0d4cf0e-d9a4-4d14-9ccc-b6e571949055",
00:08:38.850 "is_configured": true,
00:08:38.850 "data_offset": 2048,
00:08:38.850 "data_size": 63488
00:08:38.850 },
00:08:38.850 {
00:08:38.850 "name": null,
00:08:38.850 "uuid": "f807e85d-b895-491f-87b2-92c7fd11fb35",
00:08:38.850 "is_configured": false,
00:08:38.850 "data_offset": 0,
00:08:38.850 "data_size": 63488
00:08:38.850 },
00:08:38.850 {
00:08:38.850 "name": "BaseBdev3",
00:08:38.850 "uuid": "7034ff0b-4a04-46af-865f-6af655318a76",
00:08:38.850 "is_configured": true,
00:08:38.850 "data_offset": 2048,
00:08:38.850 "data_size": 63488
00:08:38.850 }
00:08:38.850 ]
00:08:38.850 }'
00:08:38.850 23:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:38.850 23:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.109 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:39.109 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:39.109 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.109 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.109 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.368 [2024-11-02 23:48:33.226651] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:39.368 "name": "Existed_Raid",
00:08:39.368 "uuid": "34ff9335-f87f-4fa0-a368-f1c15a61dd58",
00:08:39.368 "strip_size_kb": 0,
00:08:39.368 "state": "configuring",
00:08:39.368 "raid_level": "raid1",
00:08:39.368 "superblock": true,
00:08:39.368 "num_base_bdevs": 3,
00:08:39.368 "num_base_bdevs_discovered": 1,
00:08:39.368 "num_base_bdevs_operational": 3,
00:08:39.368 "base_bdevs_list": [
00:08:39.368 {
00:08:39.368 "name": "BaseBdev1",
00:08:39.368 "uuid": "d0d4cf0e-d9a4-4d14-9ccc-b6e571949055",
00:08:39.368 "is_configured": true,
00:08:39.368 "data_offset": 2048,
00:08:39.368 "data_size": 63488
00:08:39.368 },
00:08:39.368 {
00:08:39.368 "name": null,
00:08:39.368 "uuid": "f807e85d-b895-491f-87b2-92c7fd11fb35",
00:08:39.368 "is_configured": false,
00:08:39.368 "data_offset": 0,
00:08:39.368 "data_size": 63488
00:08:39.368 },
00:08:39.368 {
00:08:39.368 "name": null,
00:08:39.368 "uuid": "7034ff0b-4a04-46af-865f-6af655318a76",
00:08:39.368 "is_configured": false,
00:08:39.368 "data_offset": 0,
00:08:39.368 "data_size": 63488
00:08:39.368 }
00:08:39.368 ]
00:08:39.368 }'
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:39.368 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.628 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:39.628 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:39.628 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.628 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.628 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.895 [2024-11-02 23:48:33.737815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:39.895 "name": "Existed_Raid",
00:08:39.895 "uuid": "34ff9335-f87f-4fa0-a368-f1c15a61dd58",
00:08:39.895 "strip_size_kb": 0,
00:08:39.895 "state": "configuring",
00:08:39.895 "raid_level": "raid1",
00:08:39.895 "superblock": true,
00:08:39.895 "num_base_bdevs": 3,
00:08:39.895 "num_base_bdevs_discovered": 2,
00:08:39.895 "num_base_bdevs_operational": 3,
00:08:39.895 "base_bdevs_list": [
00:08:39.895 {
00:08:39.895 "name": "BaseBdev1",
00:08:39.895 "uuid": "d0d4cf0e-d9a4-4d14-9ccc-b6e571949055",
00:08:39.895 "is_configured": true,
00:08:39.895 "data_offset": 2048,
00:08:39.895 "data_size": 63488
00:08:39.895 },
00:08:39.895 {
00:08:39.895 "name": null,
00:08:39.895 "uuid": "f807e85d-b895-491f-87b2-92c7fd11fb35",
00:08:39.895 "is_configured": false,
00:08:39.895 "data_offset": 0,
00:08:39.895 "data_size": 63488
00:08:39.895 },
00:08:39.895 {
00:08:39.895 "name": "BaseBdev3",
00:08:39.895 "uuid": "7034ff0b-4a04-46af-865f-6af655318a76",
00:08:39.895 "is_configured": true,
00:08:39.895 "data_offset": 2048,
00:08:39.895 "data_size": 63488
00:08:39.895 }
00:08:39.895 ]
00:08:39.895 }'
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:39.895 23:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.155 [2024-11-02 23:48:34.169058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:40.155 "name": "Existed_Raid",
00:08:40.155 "uuid": "34ff9335-f87f-4fa0-a368-f1c15a61dd58",
00:08:40.155 "strip_size_kb": 0,
00:08:40.155 "state": "configuring",
00:08:40.155 "raid_level": "raid1",
00:08:40.155 "superblock": true,
00:08:40.155 "num_base_bdevs": 3,
00:08:40.155 "num_base_bdevs_discovered": 1,
00:08:40.155 "num_base_bdevs_operational": 3,
00:08:40.155 "base_bdevs_list": [
00:08:40.155 {
00:08:40.155 "name": null,
00:08:40.155 "uuid": "d0d4cf0e-d9a4-4d14-9ccc-b6e571949055",
00:08:40.155 "is_configured": false,
00:08:40.155 "data_offset": 0,
00:08:40.155 "data_size": 63488
00:08:40.155 },
00:08:40.155 {
00:08:40.155 "name": null,
00:08:40.155 "uuid": "f807e85d-b895-491f-87b2-92c7fd11fb35",
00:08:40.155 "is_configured": false,
00:08:40.155 "data_offset": 0,
00:08:40.155 "data_size": 63488
00:08:40.155 },
00:08:40.155 {
00:08:40.155 "name": "BaseBdev3",
00:08:40.155 "uuid": "7034ff0b-4a04-46af-865f-6af655318a76",
00:08:40.155 "is_configured": true,
00:08:40.155 "data_offset": 2048,
00:08:40.155 "data_size": 63488
00:08:40.155 }
00:08:40.155 ]
00:08:40.155 }'
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:40.155 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.724 [2024-11-02 23:48:34.620407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:40.724 "name": "Existed_Raid",
00:08:40.724 "uuid": "34ff9335-f87f-4fa0-a368-f1c15a61dd58",
00:08:40.724 "strip_size_kb": 0,
00:08:40.724 "state": "configuring",
00:08:40.724 "raid_level": "raid1",
00:08:40.724 "superblock": true,
00:08:40.724 "num_base_bdevs": 3,
00:08:40.724 "num_base_bdevs_discovered": 2,
00:08:40.724 "num_base_bdevs_operational": 3,
00:08:40.724 "base_bdevs_list": [
00:08:40.724 {
00:08:40.724 "name": null,
00:08:40.724 "uuid": "d0d4cf0e-d9a4-4d14-9ccc-b6e571949055",
00:08:40.724 "is_configured": false,
00:08:40.724 "data_offset": 0,
00:08:40.724 "data_size": 63488
00:08:40.724 },
00:08:40.724 {
00:08:40.724 "name": "BaseBdev2",
00:08:40.724 "uuid": "f807e85d-b895-491f-87b2-92c7fd11fb35",
00:08:40.724 "is_configured": true,
00:08:40.724 "data_offset": 2048,
00:08:40.724 "data_size": 63488
00:08:40.724 },
00:08:40.724 {
00:08:40.724 "name": "BaseBdev3",
00:08:40.724 "uuid": "7034ff0b-4a04-46af-865f-6af655318a76",
00:08:40.724 "is_configured": true,
00:08:40.724 "data_offset": 2048,
00:08:40.724 "data_size": 63488
00:08:40.724 }
00:08:40.724 ]
00:08:40.724 }'
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:40.724 23:48:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.984 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:40.984 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.984 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.984 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:40.984 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:08:41.245 23:48
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d0d4cf0e-d9a4-4d14-9ccc-b6e571949055 00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.245 [2024-11-02 23:48:35.140589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:41.245 [2024-11-02 23:48:35.140936] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:41.245 [2024-11-02 23:48:35.140997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:41.245 NewBaseBdev 00:08:41.245 [2024-11-02 23:48:35.141324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:41.245 [2024-11-02 23:48:35.141459] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:41.245 [2024-11-02 23:48:35.141476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:41.245 [2024-11-02 23:48:35.141596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:41.245 
23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.245 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.245 [ 00:08:41.245 { 00:08:41.245 "name": "NewBaseBdev", 00:08:41.245 "aliases": [ 00:08:41.245 "d0d4cf0e-d9a4-4d14-9ccc-b6e571949055" 00:08:41.245 ], 00:08:41.245 "product_name": "Malloc disk", 00:08:41.245 "block_size": 512, 00:08:41.245 "num_blocks": 65536, 00:08:41.245 "uuid": "d0d4cf0e-d9a4-4d14-9ccc-b6e571949055", 00:08:41.245 "assigned_rate_limits": { 00:08:41.245 "rw_ios_per_sec": 0, 00:08:41.245 "rw_mbytes_per_sec": 0, 00:08:41.245 "r_mbytes_per_sec": 0, 00:08:41.245 "w_mbytes_per_sec": 0 00:08:41.245 }, 00:08:41.245 "claimed": true, 00:08:41.245 "claim_type": "exclusive_write", 00:08:41.245 
"zoned": false, 00:08:41.245 "supported_io_types": { 00:08:41.245 "read": true, 00:08:41.245 "write": true, 00:08:41.245 "unmap": true, 00:08:41.245 "flush": true, 00:08:41.245 "reset": true, 00:08:41.245 "nvme_admin": false, 00:08:41.245 "nvme_io": false, 00:08:41.245 "nvme_io_md": false, 00:08:41.245 "write_zeroes": true, 00:08:41.245 "zcopy": true, 00:08:41.245 "get_zone_info": false, 00:08:41.245 "zone_management": false, 00:08:41.245 "zone_append": false, 00:08:41.245 "compare": false, 00:08:41.245 "compare_and_write": false, 00:08:41.245 "abort": true, 00:08:41.245 "seek_hole": false, 00:08:41.245 "seek_data": false, 00:08:41.245 "copy": true, 00:08:41.245 "nvme_iov_md": false 00:08:41.245 }, 00:08:41.245 "memory_domains": [ 00:08:41.245 { 00:08:41.245 "dma_device_id": "system", 00:08:41.245 "dma_device_type": 1 00:08:41.245 }, 00:08:41.245 { 00:08:41.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.245 "dma_device_type": 2 00:08:41.245 } 00:08:41.246 ], 00:08:41.246 "driver_specific": {} 00:08:41.246 } 00:08:41.246 ] 00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.246 "name": "Existed_Raid", 00:08:41.246 "uuid": "34ff9335-f87f-4fa0-a368-f1c15a61dd58", 00:08:41.246 "strip_size_kb": 0, 00:08:41.246 "state": "online", 00:08:41.246 "raid_level": "raid1", 00:08:41.246 "superblock": true, 00:08:41.246 "num_base_bdevs": 3, 00:08:41.246 "num_base_bdevs_discovered": 3, 00:08:41.246 "num_base_bdevs_operational": 3, 00:08:41.246 "base_bdevs_list": [ 00:08:41.246 { 00:08:41.246 "name": "NewBaseBdev", 00:08:41.246 "uuid": "d0d4cf0e-d9a4-4d14-9ccc-b6e571949055", 00:08:41.246 "is_configured": true, 00:08:41.246 "data_offset": 2048, 00:08:41.246 "data_size": 63488 00:08:41.246 }, 00:08:41.246 { 00:08:41.246 "name": "BaseBdev2", 00:08:41.246 "uuid": "f807e85d-b895-491f-87b2-92c7fd11fb35", 00:08:41.246 "is_configured": true, 00:08:41.246 "data_offset": 2048, 00:08:41.246 "data_size": 63488 00:08:41.246 }, 00:08:41.246 
{ 00:08:41.246 "name": "BaseBdev3", 00:08:41.246 "uuid": "7034ff0b-4a04-46af-865f-6af655318a76", 00:08:41.246 "is_configured": true, 00:08:41.246 "data_offset": 2048, 00:08:41.246 "data_size": 63488 00:08:41.246 } 00:08:41.246 ] 00:08:41.246 }' 00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.246 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.506 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:41.506 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:41.506 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:41.506 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:41.506 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:41.506 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:41.506 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:41.506 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.506 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.506 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:41.506 [2024-11-02 23:48:35.564336] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.506 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:41.766 "name": "Existed_Raid", 00:08:41.766 
"aliases": [ 00:08:41.766 "34ff9335-f87f-4fa0-a368-f1c15a61dd58" 00:08:41.766 ], 00:08:41.766 "product_name": "Raid Volume", 00:08:41.766 "block_size": 512, 00:08:41.766 "num_blocks": 63488, 00:08:41.766 "uuid": "34ff9335-f87f-4fa0-a368-f1c15a61dd58", 00:08:41.766 "assigned_rate_limits": { 00:08:41.766 "rw_ios_per_sec": 0, 00:08:41.766 "rw_mbytes_per_sec": 0, 00:08:41.766 "r_mbytes_per_sec": 0, 00:08:41.766 "w_mbytes_per_sec": 0 00:08:41.766 }, 00:08:41.766 "claimed": false, 00:08:41.766 "zoned": false, 00:08:41.766 "supported_io_types": { 00:08:41.766 "read": true, 00:08:41.766 "write": true, 00:08:41.766 "unmap": false, 00:08:41.766 "flush": false, 00:08:41.766 "reset": true, 00:08:41.766 "nvme_admin": false, 00:08:41.766 "nvme_io": false, 00:08:41.766 "nvme_io_md": false, 00:08:41.766 "write_zeroes": true, 00:08:41.766 "zcopy": false, 00:08:41.766 "get_zone_info": false, 00:08:41.766 "zone_management": false, 00:08:41.766 "zone_append": false, 00:08:41.766 "compare": false, 00:08:41.766 "compare_and_write": false, 00:08:41.766 "abort": false, 00:08:41.766 "seek_hole": false, 00:08:41.766 "seek_data": false, 00:08:41.766 "copy": false, 00:08:41.766 "nvme_iov_md": false 00:08:41.766 }, 00:08:41.766 "memory_domains": [ 00:08:41.766 { 00:08:41.766 "dma_device_id": "system", 00:08:41.766 "dma_device_type": 1 00:08:41.766 }, 00:08:41.766 { 00:08:41.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.766 "dma_device_type": 2 00:08:41.766 }, 00:08:41.766 { 00:08:41.766 "dma_device_id": "system", 00:08:41.766 "dma_device_type": 1 00:08:41.766 }, 00:08:41.766 { 00:08:41.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.766 "dma_device_type": 2 00:08:41.766 }, 00:08:41.766 { 00:08:41.766 "dma_device_id": "system", 00:08:41.766 "dma_device_type": 1 00:08:41.766 }, 00:08:41.766 { 00:08:41.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.766 "dma_device_type": 2 00:08:41.766 } 00:08:41.766 ], 00:08:41.766 "driver_specific": { 00:08:41.766 "raid": { 00:08:41.766 
"uuid": "34ff9335-f87f-4fa0-a368-f1c15a61dd58", 00:08:41.766 "strip_size_kb": 0, 00:08:41.766 "state": "online", 00:08:41.766 "raid_level": "raid1", 00:08:41.766 "superblock": true, 00:08:41.766 "num_base_bdevs": 3, 00:08:41.766 "num_base_bdevs_discovered": 3, 00:08:41.766 "num_base_bdevs_operational": 3, 00:08:41.766 "base_bdevs_list": [ 00:08:41.766 { 00:08:41.766 "name": "NewBaseBdev", 00:08:41.766 "uuid": "d0d4cf0e-d9a4-4d14-9ccc-b6e571949055", 00:08:41.766 "is_configured": true, 00:08:41.766 "data_offset": 2048, 00:08:41.766 "data_size": 63488 00:08:41.766 }, 00:08:41.766 { 00:08:41.766 "name": "BaseBdev2", 00:08:41.766 "uuid": "f807e85d-b895-491f-87b2-92c7fd11fb35", 00:08:41.766 "is_configured": true, 00:08:41.766 "data_offset": 2048, 00:08:41.766 "data_size": 63488 00:08:41.766 }, 00:08:41.766 { 00:08:41.766 "name": "BaseBdev3", 00:08:41.766 "uuid": "7034ff0b-4a04-46af-865f-6af655318a76", 00:08:41.766 "is_configured": true, 00:08:41.766 "data_offset": 2048, 00:08:41.766 "data_size": 63488 00:08:41.766 } 00:08:41.766 ] 00:08:41.766 } 00:08:41.766 } 00:08:41.766 }' 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:41.766 BaseBdev2 00:08:41.766 BaseBdev3' 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:41.766 23:48:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.766 [2024-11-02 23:48:35.839460] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:41.766 [2024-11-02 23:48:35.839601] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.766 [2024-11-02 23:48:35.839728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.766 [2024-11-02 23:48:35.840059] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.766 [2024-11-02 23:48:35.840127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78871 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 78871 ']' 
00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 78871 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:41.766 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78871 00:08:42.031 killing process with pid 78871 00:08:42.031 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:42.031 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:42.031 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78871' 00:08:42.031 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 78871 00:08:42.031 [2024-11-02 23:48:35.887700] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.031 23:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 78871 00:08:42.031 [2024-11-02 23:48:35.948825] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:42.293 23:48:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:42.293 00:08:42.293 real 0m8.930s 00:08:42.293 user 0m15.059s 00:08:42.293 sys 0m1.838s 00:08:42.293 23:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:42.293 ************************************ 00:08:42.293 END TEST raid_state_function_test_sb 00:08:42.293 ************************************ 00:08:42.293 23:48:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.293 23:48:36 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
00:08:42.293 23:48:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:42.293 23:48:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:42.293 23:48:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:42.293 ************************************ 00:08:42.293 START TEST raid_superblock_test 00:08:42.293 ************************************ 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79479 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79479 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 79479 ']' 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:42.293 23:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.557 [2024-11-02 23:48:36.444821] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:08:42.557 [2024-11-02 23:48:36.444958] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79479 ] 00:08:42.557 [2024-11-02 23:48:36.602152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.557 [2024-11-02 23:48:36.645143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.823 [2024-11-02 23:48:36.723105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.823 [2024-11-02 23:48:36.723270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:43.394 
23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.394 malloc1 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.394 [2024-11-02 23:48:37.286968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:43.394 [2024-11-02 23:48:37.287137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.394 [2024-11-02 23:48:37.287183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:43.394 [2024-11-02 23:48:37.287230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.394 [2024-11-02 23:48:37.289716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.394 [2024-11-02 23:48:37.289861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:43.394 pt1 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.394 malloc2 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:43.394 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.395 [2024-11-02 23:48:37.326163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:43.395 [2024-11-02 23:48:37.326340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.395 [2024-11-02 23:48:37.326387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:43.395 [2024-11-02 23:48:37.326434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.395 [2024-11-02 23:48:37.329104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.395 [2024-11-02 23:48:37.329197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:43.395 
pt2 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.395 malloc3 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.395 [2024-11-02 23:48:37.361122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:43.395 [2024-11-02 23:48:37.361284] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.395 [2024-11-02 23:48:37.361331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:43.395 [2024-11-02 23:48:37.361404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.395 [2024-11-02 23:48:37.364056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.395 [2024-11-02 23:48:37.364152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:43.395 pt3 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.395 [2024-11-02 23:48:37.373189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:43.395 [2024-11-02 23:48:37.375505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:43.395 [2024-11-02 23:48:37.375636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:43.395 [2024-11-02 23:48:37.375875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:43.395 [2024-11-02 23:48:37.375929] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:43.395 [2024-11-02 23:48:37.376264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:43.395 
[2024-11-02 23:48:37.376475] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:43.395 [2024-11-02 23:48:37.376529] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:43.395 [2024-11-02 23:48:37.376728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.395 "name": "raid_bdev1", 00:08:43.395 "uuid": "0266467d-c5d4-4033-b54d-b229a3a1ce47", 00:08:43.395 "strip_size_kb": 0, 00:08:43.395 "state": "online", 00:08:43.395 "raid_level": "raid1", 00:08:43.395 "superblock": true, 00:08:43.395 "num_base_bdevs": 3, 00:08:43.395 "num_base_bdevs_discovered": 3, 00:08:43.395 "num_base_bdevs_operational": 3, 00:08:43.395 "base_bdevs_list": [ 00:08:43.395 { 00:08:43.395 "name": "pt1", 00:08:43.395 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.395 "is_configured": true, 00:08:43.395 "data_offset": 2048, 00:08:43.395 "data_size": 63488 00:08:43.395 }, 00:08:43.395 { 00:08:43.395 "name": "pt2", 00:08:43.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.395 "is_configured": true, 00:08:43.395 "data_offset": 2048, 00:08:43.395 "data_size": 63488 00:08:43.395 }, 00:08:43.395 { 00:08:43.395 "name": "pt3", 00:08:43.395 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.395 "is_configured": true, 00:08:43.395 "data_offset": 2048, 00:08:43.395 "data_size": 63488 00:08:43.395 } 00:08:43.395 ] 00:08:43.395 }' 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.395 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:43.965 23:48:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.965 [2024-11-02 23:48:37.773004] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:43.965 "name": "raid_bdev1", 00:08:43.965 "aliases": [ 00:08:43.965 "0266467d-c5d4-4033-b54d-b229a3a1ce47" 00:08:43.965 ], 00:08:43.965 "product_name": "Raid Volume", 00:08:43.965 "block_size": 512, 00:08:43.965 "num_blocks": 63488, 00:08:43.965 "uuid": "0266467d-c5d4-4033-b54d-b229a3a1ce47", 00:08:43.965 "assigned_rate_limits": { 00:08:43.965 "rw_ios_per_sec": 0, 00:08:43.965 "rw_mbytes_per_sec": 0, 00:08:43.965 "r_mbytes_per_sec": 0, 00:08:43.965 "w_mbytes_per_sec": 0 00:08:43.965 }, 00:08:43.965 "claimed": false, 00:08:43.965 "zoned": false, 00:08:43.965 "supported_io_types": { 00:08:43.965 "read": true, 00:08:43.965 "write": true, 00:08:43.965 "unmap": false, 00:08:43.965 "flush": false, 00:08:43.965 "reset": true, 00:08:43.965 "nvme_admin": false, 00:08:43.965 "nvme_io": false, 00:08:43.965 "nvme_io_md": false, 00:08:43.965 "write_zeroes": true, 00:08:43.965 "zcopy": false, 00:08:43.965 "get_zone_info": false, 00:08:43.965 "zone_management": false, 00:08:43.965 "zone_append": false, 00:08:43.965 "compare": false, 00:08:43.965 
"compare_and_write": false, 00:08:43.965 "abort": false, 00:08:43.965 "seek_hole": false, 00:08:43.965 "seek_data": false, 00:08:43.965 "copy": false, 00:08:43.965 "nvme_iov_md": false 00:08:43.965 }, 00:08:43.965 "memory_domains": [ 00:08:43.965 { 00:08:43.965 "dma_device_id": "system", 00:08:43.965 "dma_device_type": 1 00:08:43.965 }, 00:08:43.965 { 00:08:43.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.965 "dma_device_type": 2 00:08:43.965 }, 00:08:43.965 { 00:08:43.965 "dma_device_id": "system", 00:08:43.965 "dma_device_type": 1 00:08:43.965 }, 00:08:43.965 { 00:08:43.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.965 "dma_device_type": 2 00:08:43.965 }, 00:08:43.965 { 00:08:43.965 "dma_device_id": "system", 00:08:43.965 "dma_device_type": 1 00:08:43.965 }, 00:08:43.965 { 00:08:43.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.965 "dma_device_type": 2 00:08:43.965 } 00:08:43.965 ], 00:08:43.965 "driver_specific": { 00:08:43.965 "raid": { 00:08:43.965 "uuid": "0266467d-c5d4-4033-b54d-b229a3a1ce47", 00:08:43.965 "strip_size_kb": 0, 00:08:43.965 "state": "online", 00:08:43.965 "raid_level": "raid1", 00:08:43.965 "superblock": true, 00:08:43.965 "num_base_bdevs": 3, 00:08:43.965 "num_base_bdevs_discovered": 3, 00:08:43.965 "num_base_bdevs_operational": 3, 00:08:43.965 "base_bdevs_list": [ 00:08:43.965 { 00:08:43.965 "name": "pt1", 00:08:43.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.965 "is_configured": true, 00:08:43.965 "data_offset": 2048, 00:08:43.965 "data_size": 63488 00:08:43.965 }, 00:08:43.965 { 00:08:43.965 "name": "pt2", 00:08:43.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.965 "is_configured": true, 00:08:43.965 "data_offset": 2048, 00:08:43.965 "data_size": 63488 00:08:43.965 }, 00:08:43.965 { 00:08:43.965 "name": "pt3", 00:08:43.965 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.965 "is_configured": true, 00:08:43.965 "data_offset": 2048, 00:08:43.965 "data_size": 63488 00:08:43.965 } 
00:08:43.965 ] 00:08:43.965 } 00:08:43.965 } 00:08:43.965 }' 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:43.965 pt2 00:08:43.965 pt3' 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.965 23:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.965 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.965 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.965 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.965 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.965 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.965 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:43.965 [2024-11-02 23:48:38.012421] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.965 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:43.965 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0266467d-c5d4-4033-b54d-b229a3a1ce47 00:08:43.965 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0266467d-c5d4-4033-b54d-b229a3a1ce47 ']' 00:08:43.965 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:43.966 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.966 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.225 [2024-11-02 23:48:38.060042] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.225 [2024-11-02 23:48:38.060095] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.225 [2024-11-02 23:48:38.060216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.225 [2024-11-02 23:48:38.060316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.225 [2024-11-02 23:48:38.060333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:44.225 23:48:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:44.225 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.226 [2024-11-02 23:48:38.207937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:44.226 [2024-11-02 23:48:38.210430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:44.226 [2024-11-02 23:48:38.210538] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:44.226 [2024-11-02 23:48:38.210655] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:44.226 [2024-11-02 23:48:38.210794] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:44.226 [2024-11-02 23:48:38.210873] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:44.226 [2024-11-02 23:48:38.210930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.226 [2024-11-02 23:48:38.210970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:44.226 request: 00:08:44.226 { 00:08:44.226 "name": "raid_bdev1", 00:08:44.226 "raid_level": "raid1", 00:08:44.226 "base_bdevs": [ 00:08:44.226 "malloc1", 00:08:44.226 "malloc2", 00:08:44.226 "malloc3" 00:08:44.226 ], 00:08:44.226 "superblock": false, 00:08:44.226 "method": "bdev_raid_create", 00:08:44.226 "req_id": 1 00:08:44.226 } 00:08:44.226 Got JSON-RPC error response 00:08:44.226 response: 00:08:44.226 { 00:08:44.226 "code": -17, 00:08:44.226 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:44.226 } 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.226 [2024-11-02 23:48:38.275726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:44.226 [2024-11-02 23:48:38.275841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.226 [2024-11-02 23:48:38.275868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:44.226 [2024-11-02 23:48:38.275884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.226 [2024-11-02 23:48:38.278580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.226 [2024-11-02 23:48:38.278634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:44.226 [2024-11-02 23:48:38.278763] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:44.226 [2024-11-02 23:48:38.278814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:44.226 pt1 00:08:44.226 
23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.226 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.484 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.484 "name": "raid_bdev1", 00:08:44.484 "uuid": "0266467d-c5d4-4033-b54d-b229a3a1ce47", 00:08:44.484 "strip_size_kb": 0, 00:08:44.484 
"state": "configuring", 00:08:44.484 "raid_level": "raid1", 00:08:44.484 "superblock": true, 00:08:44.484 "num_base_bdevs": 3, 00:08:44.484 "num_base_bdevs_discovered": 1, 00:08:44.484 "num_base_bdevs_operational": 3, 00:08:44.484 "base_bdevs_list": [ 00:08:44.484 { 00:08:44.484 "name": "pt1", 00:08:44.484 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.484 "is_configured": true, 00:08:44.484 "data_offset": 2048, 00:08:44.484 "data_size": 63488 00:08:44.484 }, 00:08:44.484 { 00:08:44.484 "name": null, 00:08:44.484 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.484 "is_configured": false, 00:08:44.484 "data_offset": 2048, 00:08:44.484 "data_size": 63488 00:08:44.484 }, 00:08:44.484 { 00:08:44.484 "name": null, 00:08:44.484 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:44.484 "is_configured": false, 00:08:44.484 "data_offset": 2048, 00:08:44.484 "data_size": 63488 00:08:44.484 } 00:08:44.484 ] 00:08:44.484 }' 00:08:44.484 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.484 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.743 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:44.743 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:44.743 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.743 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.743 [2024-11-02 23:48:38.718987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:44.743 [2024-11-02 23:48:38.719207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.743 [2024-11-02 23:48:38.719259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:44.743 
[2024-11-02 23:48:38.719327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:44.743 [2024-11-02 23:48:38.719911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:44.743 [2024-11-02 23:48:38.719996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:44.743 [2024-11-02 23:48:38.720148] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:08:44.743 [2024-11-02 23:48:38.720218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:44.743 pt2
00:08:44.743 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:44.743 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:08:44.743 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:44.743 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.743 [2024-11-02 23:48:38.731002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:08:44.743 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:44.743 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:08:44.743 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:44.743 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:44.744 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:44.744 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:44.744 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:44.744 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:44.744 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:44.744 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:44.744 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:44.744 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:44.744 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:44.744 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:44.744 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.744 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:44.744 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:44.744 "name": "raid_bdev1",
00:08:44.744 "uuid": "0266467d-c5d4-4033-b54d-b229a3a1ce47",
00:08:44.744 "strip_size_kb": 0,
00:08:44.744 "state": "configuring",
00:08:44.744 "raid_level": "raid1",
00:08:44.744 "superblock": true,
00:08:44.744 "num_base_bdevs": 3,
00:08:44.744 "num_base_bdevs_discovered": 1,
00:08:44.744 "num_base_bdevs_operational": 3,
00:08:44.744 "base_bdevs_list": [
00:08:44.744 {
00:08:44.744 "name": "pt1",
00:08:44.744 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:44.744 "is_configured": true,
00:08:44.744 "data_offset": 2048,
00:08:44.744 "data_size": 63488
00:08:44.744 },
00:08:44.744 {
00:08:44.744 "name": null,
00:08:44.744 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:44.744 "is_configured": false,
00:08:44.744 "data_offset": 0,
00:08:44.744 "data_size": 63488
00:08:44.744 },
00:08:44.744 {
00:08:44.744 "name": null,
00:08:44.744 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:44.744 "is_configured": false,
00:08:44.744 "data_offset": 2048,
00:08:44.744 "data_size": 63488
00:08:44.744 }
00:08:44.744 ]
00:08:44.744 }'
00:08:44.744 23:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:44.744 23:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.313 [2024-11-02 23:48:39.178394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:45.313 [2024-11-02 23:48:39.178492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:45.313 [2024-11-02 23:48:39.178523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:08:45.313 [2024-11-02 23:48:39.178535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:45.313 [2024-11-02 23:48:39.179109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:45.313 [2024-11-02 23:48:39.179132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:45.313 [2024-11-02 23:48:39.179241] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:08:45.313 [2024-11-02 23:48:39.179269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:45.313 pt2
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.313 [2024-11-02 23:48:39.190323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:08:45.313 [2024-11-02 23:48:39.190414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:45.313 [2024-11-02 23:48:39.190443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:08:45.313 [2024-11-02 23:48:39.190454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:45.313 [2024-11-02 23:48:39.190904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:45.313 [2024-11-02 23:48:39.190924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:08:45.313 [2024-11-02 23:48:39.191010] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:08:45.313 [2024-11-02 23:48:39.191039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:08:45.313 [2024-11-02 23:48:39.191157] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
00:08:45.313 [2024-11-02 23:48:39.191173] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:45.313 [2024-11-02 23:48:39.191448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530
00:08:45.313 [2024-11-02 23:48:39.191595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
00:08:45.313 [2024-11-02 23:48:39.191609] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900
00:08:45.313 [2024-11-02 23:48:39.191731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:45.313 pt3
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:45.313 "name": "raid_bdev1",
00:08:45.313 "uuid": "0266467d-c5d4-4033-b54d-b229a3a1ce47",
00:08:45.313 "strip_size_kb": 0,
00:08:45.313 "state": "online",
00:08:45.313 "raid_level": "raid1",
00:08:45.313 "superblock": true,
00:08:45.313 "num_base_bdevs": 3,
00:08:45.313 "num_base_bdevs_discovered": 3,
00:08:45.313 "num_base_bdevs_operational": 3,
00:08:45.313 "base_bdevs_list": [
00:08:45.313 {
00:08:45.313 "name": "pt1",
00:08:45.313 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:45.313 "is_configured": true,
00:08:45.313 "data_offset": 2048,
00:08:45.313 "data_size": 63488
00:08:45.313 },
00:08:45.313 {
00:08:45.313 "name": "pt2",
00:08:45.313 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:45.313 "is_configured": true,
00:08:45.313 "data_offset": 2048,
00:08:45.313 "data_size": 63488
00:08:45.313 },
00:08:45.313 {
00:08:45.313 "name": "pt3",
00:08:45.313 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:45.313 "is_configured": true,
00:08:45.313 "data_offset": 2048,
00:08:45.313 "data_size": 63488
00:08:45.313 }
00:08:45.313 ]
00:08:45.313 }'
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:45.313 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.572 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:08:45.572 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:45.572 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:45.572 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:45.572 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:45.572 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:45.572 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:45.572 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:45.572 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:45.572 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.572 [2024-11-02 23:48:39.649948] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:45.831 "name": "raid_bdev1",
00:08:45.831 "aliases": [
00:08:45.831 "0266467d-c5d4-4033-b54d-b229a3a1ce47"
00:08:45.831 ],
00:08:45.831 "product_name": "Raid Volume",
00:08:45.831 "block_size": 512,
00:08:45.831 "num_blocks": 63488,
00:08:45.831 "uuid": "0266467d-c5d4-4033-b54d-b229a3a1ce47",
00:08:45.831 "assigned_rate_limits": {
00:08:45.831 "rw_ios_per_sec": 0,
00:08:45.831 "rw_mbytes_per_sec": 0,
00:08:45.831 "r_mbytes_per_sec": 0,
00:08:45.831 "w_mbytes_per_sec": 0
00:08:45.831 },
00:08:45.831 "claimed": false,
00:08:45.831 "zoned": false,
00:08:45.831 "supported_io_types": {
00:08:45.831 "read": true,
00:08:45.831 "write": true,
00:08:45.831 "unmap": false,
00:08:45.831 "flush": false,
00:08:45.831 "reset": true,
00:08:45.831 "nvme_admin": false,
00:08:45.831 "nvme_io": false,
00:08:45.831 "nvme_io_md": false,
00:08:45.831 "write_zeroes": true,
00:08:45.831 "zcopy": false,
00:08:45.831 "get_zone_info": false,
00:08:45.831 "zone_management": false,
00:08:45.831 "zone_append": false,
00:08:45.831 "compare": false,
00:08:45.831 "compare_and_write": false,
00:08:45.831 "abort": false,
00:08:45.831 "seek_hole": false,
00:08:45.831 "seek_data": false,
00:08:45.831 "copy": false,
00:08:45.831 "nvme_iov_md": false
00:08:45.831 },
00:08:45.831 "memory_domains": [
00:08:45.831 {
00:08:45.831 "dma_device_id": "system",
00:08:45.831 "dma_device_type": 1
00:08:45.831 },
00:08:45.831 {
00:08:45.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:45.831 "dma_device_type": 2
00:08:45.831 },
00:08:45.831 {
00:08:45.831 "dma_device_id": "system",
00:08:45.831 "dma_device_type": 1
00:08:45.831 },
00:08:45.831 {
00:08:45.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:45.831 "dma_device_type": 2
00:08:45.831 },
00:08:45.831 {
00:08:45.831 "dma_device_id": "system",
00:08:45.831 "dma_device_type": 1
00:08:45.831 },
00:08:45.831 {
00:08:45.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:45.831 "dma_device_type": 2
00:08:45.831 }
00:08:45.831 ],
00:08:45.831 "driver_specific": {
00:08:45.831 "raid": {
00:08:45.831 "uuid": "0266467d-c5d4-4033-b54d-b229a3a1ce47",
00:08:45.831 "strip_size_kb": 0,
00:08:45.831 "state": "online",
00:08:45.831 "raid_level": "raid1",
00:08:45.831 "superblock": true,
00:08:45.831 "num_base_bdevs": 3,
00:08:45.831 "num_base_bdevs_discovered": 3,
00:08:45.831 "num_base_bdevs_operational": 3,
00:08:45.831 "base_bdevs_list": [
00:08:45.831 {
00:08:45.831 "name": "pt1",
00:08:45.831 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:45.831 "is_configured": true,
00:08:45.831 "data_offset": 2048,
00:08:45.831 "data_size": 63488
00:08:45.831 },
00:08:45.831 {
00:08:45.831 "name": "pt2",
00:08:45.831 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:45.831 "is_configured": true,
00:08:45.831 "data_offset": 2048,
00:08:45.831 "data_size": 63488
00:08:45.831 },
00:08:45.831 {
00:08:45.831 "name": "pt3",
00:08:45.831 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:45.831 "is_configured": true,
00:08:45.831 "data_offset": 2048,
00:08:45.831 "data_size": 63488
00:08:45.831 }
00:08:45.831 ]
00:08:45.831 }
00:08:45.831 }
00:08:45.831 }'
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:45.831 pt2
00:08:45.831 pt3'
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:45.831 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.831 [2024-11-02 23:48:39.905420] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0266467d-c5d4-4033-b54d-b229a3a1ce47 '!=' 0266467d-c5d4-4033-b54d-b229a3a1ce47 ']'
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.089 [2024-11-02 23:48:39.949112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.089 23:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.089 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:46.089 "name": "raid_bdev1",
00:08:46.089 "uuid": "0266467d-c5d4-4033-b54d-b229a3a1ce47",
00:08:46.089 "strip_size_kb": 0,
00:08:46.089 "state": "online",
00:08:46.089 "raid_level": "raid1",
00:08:46.089 "superblock": true,
00:08:46.089 "num_base_bdevs": 3,
00:08:46.089 "num_base_bdevs_discovered": 2,
00:08:46.089 "num_base_bdevs_operational": 2,
00:08:46.089 "base_bdevs_list": [
00:08:46.089 {
00:08:46.089 "name": null,
00:08:46.089 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:46.089 "is_configured": false,
00:08:46.089 "data_offset": 0,
00:08:46.089 "data_size": 63488
00:08:46.089 },
00:08:46.089 {
00:08:46.089 "name": "pt2",
00:08:46.089 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:46.089 "is_configured": true,
00:08:46.089 "data_offset": 2048,
00:08:46.089 "data_size": 63488
00:08:46.089 },
00:08:46.089 {
00:08:46.089 "name": "pt3",
00:08:46.089 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:46.089 "is_configured": true,
00:08:46.089 "data_offset": 2048,
00:08:46.089 "data_size": 63488
00:08:46.089 }
00:08:46.089 ]
00:08:46.089 }'
00:08:46.089 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:46.089 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.347 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:46.347 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.347 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.347 [2024-11-02 23:48:40.404309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:46.347 [2024-11-02 23:48:40.404464] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:46.347 [2024-11-02 23:48:40.404598] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:46.347 [2024-11-02 23:48:40.404681] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:46.347 [2024-11-02 23:48:40.404694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline
00:08:46.347 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.347 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:46.347 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.347 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:08:46.347 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.347 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.607 [2024-11-02 23:48:40.488094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:46.607 [2024-11-02 23:48:40.488174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:46.607 [2024-11-02 23:48:40.488200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:08:46.607 [2024-11-02 23:48:40.488211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:46.607 [2024-11-02 23:48:40.490865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:46.607 [2024-11-02 23:48:40.490975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:46.607 [2024-11-02 23:48:40.491090] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:08:46.607 [2024-11-02 23:48:40.491134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:46.607 pt2
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:46.607 "name": "raid_bdev1",
00:08:46.607 "uuid": "0266467d-c5d4-4033-b54d-b229a3a1ce47",
00:08:46.607 "strip_size_kb": 0,
00:08:46.607 "state": "configuring",
00:08:46.607 "raid_level": "raid1",
00:08:46.607 "superblock": true,
00:08:46.607 "num_base_bdevs": 3,
00:08:46.607 "num_base_bdevs_discovered": 1,
00:08:46.607 "num_base_bdevs_operational": 2,
00:08:46.607 "base_bdevs_list": [
00:08:46.607 {
00:08:46.607 "name": null,
00:08:46.607 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:46.607 "is_configured": false,
00:08:46.607 "data_offset": 2048,
00:08:46.607 "data_size": 63488
00:08:46.607 },
00:08:46.607 {
00:08:46.607 "name": "pt2",
00:08:46.607 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:46.607 "is_configured": true,
00:08:46.607 "data_offset": 2048,
00:08:46.607 "data_size": 63488
00:08:46.607 },
00:08:46.607 {
00:08:46.607 "name": null,
00:08:46.607 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:46.607 "is_configured": false,
00:08:46.607 "data_offset": 2048,
00:08:46.607 "data_size": 63488
00:08:46.607 }
00:08:46.607 ]
00:08:46.607 }'
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:46.607 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.867 [2024-11-02 23:48:40.927450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:08:46.867 [2024-11-02 23:48:40.927639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:46.867 [2024-11-02 23:48:40.927693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:08:46.867 [2024-11-02 23:48:40.927801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:46.867 [2024-11-02 23:48:40.928414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:46.867 [2024-11-02 23:48:40.928497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:08:46.867 [2024-11-02 23:48:40.928656] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:08:46.867 [2024-11-02 23:48:40.928737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:08:46.867 [2024-11-02 23:48:40.928934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80
00:08:46.867 [2024-11-02 23:48:40.928986] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:46.867 [2024-11-02 23:48:40.929358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:08:46.867 [2024-11-02 23:48:40.929584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80
00:08:46.867 [2024-11-02 23:48:40.929645] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80
00:08:46.867 [2024-11-02 23:48:40.929861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:46.867 pt3
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.867 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.126 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:47.126 "name": "raid_bdev1",
00:08:47.126 "uuid": "0266467d-c5d4-4033-b54d-b229a3a1ce47",
00:08:47.126 "strip_size_kb": 0,
00:08:47.126 "state": "online",
00:08:47.126 "raid_level": "raid1",
00:08:47.126 "superblock": true,
00:08:47.126 "num_base_bdevs": 3,
00:08:47.126 "num_base_bdevs_discovered": 2,
00:08:47.126 "num_base_bdevs_operational": 2,
00:08:47.126 "base_bdevs_list": [
00:08:47.126 {
00:08:47.126 "name": null,
00:08:47.126 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:47.126 "is_configured": false,
00:08:47.126 "data_offset": 2048,
00:08:47.126 "data_size": 63488
00:08:47.126 },
00:08:47.126 {
00:08:47.126 "name": "pt2",
00:08:47.126 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:47.126 "is_configured": true,
00:08:47.126 "data_offset": 2048,
00:08:47.126 "data_size": 63488
00:08:47.126 },
00:08:47.126 {
00:08:47.126 "name": "pt3",
00:08:47.126 "uuid": "00000000-0000-0000-0000-000000000003",
00:08:47.126 "is_configured": true,
00:08:47.126 "data_offset": 2048,
00:08:47.126 "data_size": 63488
00:08:47.126 }
00:08:47.126 ]
00:08:47.126 }'
00:08:47.126 23:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:47.126 23:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.386 [2024-11-02 23:48:41.354837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:47.386 [2024-11-02 23:48:41.354979] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:47.386 [2024-11-02 23:48:41.355099] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:47.386 [2024-11-02 23:48:41.355178] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:47.386 [2024-11-02 23:48:41.355193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']'
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.386 [2024-11-02 23:48:41.426645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:47.386 [2024-11-02 23:48:41.426725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:47.386 [2024-11-02 23:48:41.426764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:08:47.386 [2024-11-02 23:48:41.426780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:47.386 [2024-11-02 23:48:41.429360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:47.386 [2024-11-02 23:48:41.429454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:47.386 [2024-11-02 23:48:41.429558] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:08:47.386 [2024-11-02 23:48:41.429620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:47.386 [2024-11-02 23:48:41.429773] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:08:47.386 [2024-11-02 23:48:41.429809] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:47.386 [2024-11-02 23:48:41.429828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000002000 name raid_bdev1, state configuring 00:08:47.386 [2024-11-02 23:48:41.429889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:47.386 pt1 00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.386 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.386 23:48:41 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.651 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.651 "name": "raid_bdev1", 00:08:47.651 "uuid": "0266467d-c5d4-4033-b54d-b229a3a1ce47", 00:08:47.651 "strip_size_kb": 0, 00:08:47.651 "state": "configuring", 00:08:47.651 "raid_level": "raid1", 00:08:47.651 "superblock": true, 00:08:47.651 "num_base_bdevs": 3, 00:08:47.651 "num_base_bdevs_discovered": 1, 00:08:47.651 "num_base_bdevs_operational": 2, 00:08:47.651 "base_bdevs_list": [ 00:08:47.651 { 00:08:47.651 "name": null, 00:08:47.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.651 "is_configured": false, 00:08:47.651 "data_offset": 2048, 00:08:47.651 "data_size": 63488 00:08:47.651 }, 00:08:47.651 { 00:08:47.651 "name": "pt2", 00:08:47.651 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:47.651 "is_configured": true, 00:08:47.651 "data_offset": 2048, 00:08:47.651 "data_size": 63488 00:08:47.651 }, 00:08:47.651 { 00:08:47.651 "name": null, 00:08:47.651 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:47.651 "is_configured": false, 00:08:47.651 "data_offset": 2048, 00:08:47.651 "data_size": 63488 00:08:47.651 } 00:08:47.651 ] 00:08:47.651 }' 00:08:47.651 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.651 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.917 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:08:47.917 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.917 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.917 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:47.917 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:47.917 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:08:47.917 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:47.917 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.917 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.917 [2024-11-02 23:48:41.941884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:47.917 [2024-11-02 23:48:41.941983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.917 [2024-11-02 23:48:41.942010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:08:47.917 [2024-11-02 23:48:41.942026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.917 [2024-11-02 23:48:41.942589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.917 [2024-11-02 23:48:41.942619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:47.917 [2024-11-02 23:48:41.942721] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:47.917 [2024-11-02 23:48:41.942772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:47.917 [2024-11-02 23:48:41.942901] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:08:47.917 [2024-11-02 23:48:41.942915] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:47.917 [2024-11-02 23:48:41.943181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:08:47.917 [2024-11-02 23:48:41.943347] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:08:47.918 [2024-11-02 23:48:41.943357] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:08:47.918 [2024-11-02 23:48:41.943492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.918 pt3 00:08:47.918 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.918 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:47.918 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.918 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.918 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.918 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.918 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.918 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.918 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.918 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.918 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.918 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.918 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.918 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.918 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.918 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:47.918 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.918 "name": "raid_bdev1", 00:08:47.918 "uuid": "0266467d-c5d4-4033-b54d-b229a3a1ce47", 00:08:47.918 "strip_size_kb": 0, 00:08:47.918 "state": "online", 00:08:47.918 "raid_level": "raid1", 00:08:47.918 "superblock": true, 00:08:47.918 "num_base_bdevs": 3, 00:08:47.918 "num_base_bdevs_discovered": 2, 00:08:47.918 "num_base_bdevs_operational": 2, 00:08:47.918 "base_bdevs_list": [ 00:08:47.918 { 00:08:47.918 "name": null, 00:08:47.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.918 "is_configured": false, 00:08:47.918 "data_offset": 2048, 00:08:47.918 "data_size": 63488 00:08:47.918 }, 00:08:47.918 { 00:08:47.918 "name": "pt2", 00:08:47.918 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:47.918 "is_configured": true, 00:08:47.918 "data_offset": 2048, 00:08:47.918 "data_size": 63488 00:08:47.918 }, 00:08:47.918 { 00:08:47.918 "name": "pt3", 00:08:47.918 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:47.918 "is_configured": true, 00:08:47.918 "data_offset": 2048, 00:08:47.918 "data_size": 63488 00:08:47.918 } 00:08:47.918 ] 00:08:47.918 }' 00:08:47.918 23:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.918 23:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.484 [2024-11-02 23:48:42.417400] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 0266467d-c5d4-4033-b54d-b229a3a1ce47 '!=' 0266467d-c5d4-4033-b54d-b229a3a1ce47 ']' 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79479 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 79479 ']' 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 79479 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79479 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79479' 00:08:48.484 killing process with pid 79479 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@971 -- # kill 79479 00:08:48.484 [2024-11-02 23:48:42.502157] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:48.484 23:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 79479 00:08:48.484 [2024-11-02 23:48:42.502384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.484 [2024-11-02 23:48:42.502472] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:48.484 [2024-11-02 23:48:42.502484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:08:48.484 [2024-11-02 23:48:42.566295] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:49.052 23:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:49.052 00:08:49.052 real 0m6.543s 00:08:49.052 user 0m10.750s 00:08:49.052 sys 0m1.436s 00:08:49.052 23:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:49.052 ************************************ 00:08:49.052 END TEST raid_superblock_test 00:08:49.052 ************************************ 00:08:49.052 23:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.052 23:48:42 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:08:49.052 23:48:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:49.052 23:48:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:49.052 23:48:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:49.052 ************************************ 00:08:49.052 START TEST raid_read_error_test 00:08:49.052 ************************************ 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 read 00:08:49.052 23:48:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:49.052 23:48:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8biz9vDNH8 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79909 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79909 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 79909 ']' 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:49.052 23:48:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.052 [2024-11-02 23:48:43.064246] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:08:49.052 [2024-11-02 23:48:43.064390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79909 ] 00:08:49.311 [2024-11-02 23:48:43.222537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.311 [2024-11-02 23:48:43.264344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.311 [2024-11-02 23:48:43.341595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.311 [2024-11-02 23:48:43.341735] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.879 BaseBdev1_malloc 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.879 true 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.879 [2024-11-02 23:48:43.937048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:49.879 [2024-11-02 23:48:43.937129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.879 [2024-11-02 23:48:43.937156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:49.879 [2024-11-02 23:48:43.937168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.879 [2024-11-02 23:48:43.939863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.879 [2024-11-02 23:48:43.939908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:49.879 BaseBdev1 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.879 BaseBdev2_malloc 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.879 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.138 true 00:08:50.138 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.138 23:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:50.138 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.138 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.138 [2024-11-02 23:48:43.983952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:50.138 [2024-11-02 23:48:43.984022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.138 [2024-11-02 23:48:43.984044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:50.138 [2024-11-02 23:48:43.984066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.138 [2024-11-02 23:48:43.986508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.138 BaseBdev2 00:08:50.138 [2024-11-02 23:48:43.986634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:50.138 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.138 23:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:50.138 23:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:50.138 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.138 23:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.138 BaseBdev3_malloc 00:08:50.138 23:48:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.138 23:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:50.138 23:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.138 23:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.138 true 00:08:50.138 23:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.138 23:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:50.138 23:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.138 23:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.139 [2024-11-02 23:48:44.030860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:50.139 [2024-11-02 23:48:44.031010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.139 [2024-11-02 23:48:44.031039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:50.139 [2024-11-02 23:48:44.031051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.139 [2024-11-02 23:48:44.033626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.139 [2024-11-02 23:48:44.033723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:50.139 BaseBdev3 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.139 [2024-11-02 23:48:44.042941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.139 [2024-11-02 23:48:44.045103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.139 [2024-11-02 23:48:44.045253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:50.139 [2024-11-02 23:48:44.045464] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:50.139 [2024-11-02 23:48:44.045482] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:50.139 [2024-11-02 23:48:44.045768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:50.139 [2024-11-02 23:48:44.045943] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:50.139 [2024-11-02 23:48:44.045955] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:50.139 [2024-11-02 23:48:44.046133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:50.139 23:48:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.139 "name": "raid_bdev1", 00:08:50.139 "uuid": "81d6f89c-cf02-4e8f-95b7-9c6203fe2d7f", 00:08:50.139 "strip_size_kb": 0, 00:08:50.139 "state": "online", 00:08:50.139 "raid_level": "raid1", 00:08:50.139 "superblock": true, 00:08:50.139 "num_base_bdevs": 3, 00:08:50.139 "num_base_bdevs_discovered": 3, 00:08:50.139 "num_base_bdevs_operational": 3, 00:08:50.139 "base_bdevs_list": [ 00:08:50.139 { 00:08:50.139 "name": "BaseBdev1", 00:08:50.139 "uuid": "285ea044-e86d-50dc-bffb-d8b92f8f860d", 00:08:50.139 "is_configured": true, 00:08:50.139 "data_offset": 2048, 00:08:50.139 "data_size": 63488 00:08:50.139 }, 00:08:50.139 { 00:08:50.139 "name": "BaseBdev2", 00:08:50.139 "uuid": "4b05b479-7e20-5e83-95ff-f13e2d8e0c6c", 00:08:50.139 "is_configured": true, 00:08:50.139 "data_offset": 2048, 00:08:50.139 "data_size": 63488 
00:08:50.139 }, 00:08:50.139 { 00:08:50.139 "name": "BaseBdev3", 00:08:50.139 "uuid": "0b0d9219-4e1d-5fef-b6a5-67c79b8ef52b", 00:08:50.139 "is_configured": true, 00:08:50.139 "data_offset": 2048, 00:08:50.139 "data_size": 63488 00:08:50.139 } 00:08:50.139 ] 00:08:50.139 }' 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.139 23:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.398 23:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:50.398 23:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:50.656 [2024-11-02 23:48:44.570757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.594 
23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.594 "name": "raid_bdev1", 00:08:51.594 "uuid": "81d6f89c-cf02-4e8f-95b7-9c6203fe2d7f", 00:08:51.594 "strip_size_kb": 0, 00:08:51.594 "state": "online", 00:08:51.594 "raid_level": "raid1", 00:08:51.594 "superblock": true, 00:08:51.594 "num_base_bdevs": 3, 00:08:51.594 "num_base_bdevs_discovered": 3, 00:08:51.594 "num_base_bdevs_operational": 3, 00:08:51.594 "base_bdevs_list": [ 00:08:51.594 { 00:08:51.594 "name": "BaseBdev1", 00:08:51.594 "uuid": "285ea044-e86d-50dc-bffb-d8b92f8f860d", 
00:08:51.594 "is_configured": true, 00:08:51.594 "data_offset": 2048, 00:08:51.594 "data_size": 63488 00:08:51.594 }, 00:08:51.594 { 00:08:51.594 "name": "BaseBdev2", 00:08:51.594 "uuid": "4b05b479-7e20-5e83-95ff-f13e2d8e0c6c", 00:08:51.594 "is_configured": true, 00:08:51.594 "data_offset": 2048, 00:08:51.594 "data_size": 63488 00:08:51.594 }, 00:08:51.594 { 00:08:51.594 "name": "BaseBdev3", 00:08:51.594 "uuid": "0b0d9219-4e1d-5fef-b6a5-67c79b8ef52b", 00:08:51.594 "is_configured": true, 00:08:51.594 "data_offset": 2048, 00:08:51.594 "data_size": 63488 00:08:51.594 } 00:08:51.594 ] 00:08:51.594 }' 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.594 23:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.851 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:51.851 23:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.851 23:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.110 [2024-11-02 23:48:45.947011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:52.110 [2024-11-02 23:48:45.947065] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:52.110 [2024-11-02 23:48:45.949630] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.110 [2024-11-02 23:48:45.949693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.110 [2024-11-02 23:48:45.949833] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.110 [2024-11-02 23:48:45.949856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:52.110 { 00:08:52.110 "results": [ 00:08:52.110 { 00:08:52.110 "job": "raid_bdev1", 
00:08:52.110 "core_mask": "0x1", 00:08:52.110 "workload": "randrw", 00:08:52.110 "percentage": 50, 00:08:52.110 "status": "finished", 00:08:52.110 "queue_depth": 1, 00:08:52.110 "io_size": 131072, 00:08:52.110 "runtime": 1.376524, 00:08:52.110 "iops": 10668.17578189701, 00:08:52.110 "mibps": 1333.5219727371264, 00:08:52.110 "io_failed": 0, 00:08:52.110 "io_timeout": 0, 00:08:52.110 "avg_latency_us": 90.93137904733018, 00:08:52.110 "min_latency_us": 24.593886462882097, 00:08:52.110 "max_latency_us": 1473.844541484716 00:08:52.110 } 00:08:52.110 ], 00:08:52.110 "core_count": 1 00:08:52.110 } 00:08:52.110 23:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.110 23:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79909 00:08:52.110 23:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 79909 ']' 00:08:52.110 23:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 79909 00:08:52.110 23:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:52.110 23:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:52.110 23:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79909 00:08:52.110 killing process with pid 79909 00:08:52.110 23:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:52.110 23:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:52.110 23:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79909' 00:08:52.110 23:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 79909 00:08:52.110 [2024-11-02 23:48:45.996600] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:52.110 23:48:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 79909 00:08:52.110 [2024-11-02 23:48:46.046196] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:52.369 23:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8biz9vDNH8 00:08:52.369 23:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:52.369 23:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:52.369 23:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:52.369 23:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:52.369 23:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:52.369 23:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:52.369 23:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:52.369 00:08:52.369 real 0m3.417s 00:08:52.369 user 0m4.222s 00:08:52.369 sys 0m0.609s 00:08:52.369 ************************************ 00:08:52.369 END TEST raid_read_error_test 00:08:52.369 ************************************ 00:08:52.369 23:48:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:52.369 23:48:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.370 23:48:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:08:52.370 23:48:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:52.370 23:48:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:52.370 23:48:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:52.370 ************************************ 00:08:52.370 START TEST raid_write_error_test 00:08:52.370 ************************************ 00:08:52.370 23:48:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write 00:08:52.370 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:52.370 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:52.370 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:52.370 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:52.370 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.370 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:52.370 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:52.370 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.370 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:52.370 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:52.370 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.370 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:52.370 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:52.370 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.628 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:52.628 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:52.628 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:52.628 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:08:52.628 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:52.628 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:52.628 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:52.628 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:52.628 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:52.628 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:52.628 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Hfc0NjCGY5 00:08:52.628 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80039 00:08:52.629 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:52.629 23:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80039 00:08:52.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.629 23:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 80039 ']' 00:08:52.629 23:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.629 23:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:52.629 23:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:52.629 23:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:52.629 23:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.629 [2024-11-02 23:48:46.553640] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:08:52.629 [2024-11-02 23:48:46.553771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80039 ] 00:08:52.629 [2024-11-02 23:48:46.710130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.887 [2024-11-02 23:48:46.750831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.887 [2024-11-02 23:48:46.827813] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.887 [2024-11-02 23:48:46.827868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.455 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:53.455 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:53.455 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.456 BaseBdev1_malloc 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.456 true 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.456 [2024-11-02 23:48:47.416320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:53.456 [2024-11-02 23:48:47.416410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.456 [2024-11-02 23:48:47.416448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:53.456 [2024-11-02 23:48:47.416460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.456 [2024-11-02 23:48:47.419044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.456 [2024-11-02 23:48:47.419171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:53.456 BaseBdev1 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:53.456 BaseBdev2_malloc 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.456 true 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.456 [2024-11-02 23:48:47.463483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:53.456 [2024-11-02 23:48:47.463555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.456 [2024-11-02 23:48:47.463579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:53.456 [2024-11-02 23:48:47.463612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.456 [2024-11-02 23:48:47.466094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.456 [2024-11-02 23:48:47.466134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:53.456 BaseBdev2 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:53.456 23:48:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.456 BaseBdev3_malloc 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.456 true 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.456 [2024-11-02 23:48:47.511508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:53.456 [2024-11-02 23:48:47.511660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.456 [2024-11-02 23:48:47.511688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:53.456 [2024-11-02 23:48:47.511699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.456 [2024-11-02 23:48:47.514144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.456 [2024-11-02 23:48:47.514201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:53.456 BaseBdev3 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.456 [2024-11-02 23:48:47.523580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.456 [2024-11-02 23:48:47.525851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.456 [2024-11-02 23:48:47.525982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:53.456 [2024-11-02 23:48:47.526206] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:53.456 [2024-11-02 23:48:47.526229] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:53.456 [2024-11-02 23:48:47.526509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:53.456 [2024-11-02 23:48:47.526692] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:53.456 [2024-11-02 23:48:47.526703] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:53.456 [2024-11-02 23:48:47.526890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.456 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.716 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.716 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.716 "name": "raid_bdev1", 00:08:53.716 "uuid": "9674dedb-dcc0-40b3-b96c-86f9a94af597", 00:08:53.716 "strip_size_kb": 0, 00:08:53.716 "state": "online", 00:08:53.716 "raid_level": "raid1", 00:08:53.716 "superblock": true, 00:08:53.716 "num_base_bdevs": 3, 00:08:53.716 "num_base_bdevs_discovered": 3, 00:08:53.716 "num_base_bdevs_operational": 3, 00:08:53.716 "base_bdevs_list": [ 00:08:53.716 { 00:08:53.716 "name": "BaseBdev1", 00:08:53.716 
"uuid": "67394d5c-5215-5a2d-b612-c88305255efc", 00:08:53.716 "is_configured": true, 00:08:53.716 "data_offset": 2048, 00:08:53.716 "data_size": 63488 00:08:53.716 }, 00:08:53.716 { 00:08:53.716 "name": "BaseBdev2", 00:08:53.716 "uuid": "b533e84f-cf30-5288-8880-1cf451ef2203", 00:08:53.716 "is_configured": true, 00:08:53.716 "data_offset": 2048, 00:08:53.716 "data_size": 63488 00:08:53.716 }, 00:08:53.716 { 00:08:53.716 "name": "BaseBdev3", 00:08:53.716 "uuid": "e9ce4f30-801a-52ea-aca5-055d92737bc3", 00:08:53.716 "is_configured": true, 00:08:53.716 "data_offset": 2048, 00:08:53.716 "data_size": 63488 00:08:53.716 } 00:08:53.716 ] 00:08:53.716 }' 00:08:53.716 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.716 23:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.977 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:53.977 23:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:54.255 [2024-11-02 23:48:48.071315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.193 [2024-11-02 23:48:48.986866] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:55.193 [2024-11-02 23:48:48.987077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:55.193 [2024-11-02 23:48:48.987338] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002600 
00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.193 23:48:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.193 23:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:55.193 23:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.193 23:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.193 23:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.193 "name": "raid_bdev1", 00:08:55.193 "uuid": "9674dedb-dcc0-40b3-b96c-86f9a94af597", 00:08:55.193 "strip_size_kb": 0, 00:08:55.193 "state": "online", 00:08:55.193 "raid_level": "raid1", 00:08:55.193 "superblock": true, 00:08:55.194 "num_base_bdevs": 3, 00:08:55.194 "num_base_bdevs_discovered": 2, 00:08:55.194 "num_base_bdevs_operational": 2, 00:08:55.194 "base_bdevs_list": [ 00:08:55.194 { 00:08:55.194 "name": null, 00:08:55.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.194 "is_configured": false, 00:08:55.194 "data_offset": 0, 00:08:55.194 "data_size": 63488 00:08:55.194 }, 00:08:55.194 { 00:08:55.194 "name": "BaseBdev2", 00:08:55.194 "uuid": "b533e84f-cf30-5288-8880-1cf451ef2203", 00:08:55.194 "is_configured": true, 00:08:55.194 "data_offset": 2048, 00:08:55.194 "data_size": 63488 00:08:55.194 }, 00:08:55.194 { 00:08:55.194 "name": "BaseBdev3", 00:08:55.194 "uuid": "e9ce4f30-801a-52ea-aca5-055d92737bc3", 00:08:55.194 "is_configured": true, 00:08:55.194 "data_offset": 2048, 00:08:55.194 "data_size": 63488 00:08:55.194 } 00:08:55.194 ] 00:08:55.194 }' 00:08:55.194 23:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.194 23:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.453 23:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:55.453 23:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.453 23:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.453 [2024-11-02 23:48:49.409684] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.453 [2024-11-02 23:48:49.409863] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.453 [2024-11-02 23:48:49.412529] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.453 [2024-11-02 23:48:49.412645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.453 [2024-11-02 23:48:49.412781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.453 [2024-11-02 23:48:49.412838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:55.453 { 00:08:55.453 "results": [ 00:08:55.453 { 00:08:55.453 "job": "raid_bdev1", 00:08:55.453 "core_mask": "0x1", 00:08:55.453 "workload": "randrw", 00:08:55.453 "percentage": 50, 00:08:55.453 "status": "finished", 00:08:55.453 "queue_depth": 1, 00:08:55.453 "io_size": 131072, 00:08:55.453 "runtime": 1.338737, 00:08:55.453 "iops": 11785.735361015644, 00:08:55.453 "mibps": 1473.2169201269555, 00:08:55.453 "io_failed": 0, 00:08:55.453 "io_timeout": 0, 00:08:55.453 "avg_latency_us": 82.0047539523553, 00:08:55.453 "min_latency_us": 24.593886462882097, 00:08:55.453 "max_latency_us": 1438.071615720524 00:08:55.453 } 00:08:55.453 ], 00:08:55.453 "core_count": 1 00:08:55.453 } 00:08:55.453 23:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.453 23:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80039 00:08:55.453 23:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 80039 ']' 00:08:55.453 23:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 80039 00:08:55.453 23:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:08:55.453 23:48:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:55.453 23:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80039 00:08:55.453 23:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:55.453 23:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:55.453 23:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80039' 00:08:55.453 killing process with pid 80039 00:08:55.453 23:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 80039 00:08:55.453 [2024-11-02 23:48:49.462821] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:55.453 23:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 80039 00:08:55.453 [2024-11-02 23:48:49.513879] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.024 23:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Hfc0NjCGY5 00:08:56.024 23:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:56.024 23:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:56.024 23:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:56.024 23:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:56.024 23:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:56.024 23:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:56.024 23:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:56.024 ************************************ 00:08:56.024 END TEST raid_write_error_test 00:08:56.024 
************************************ 00:08:56.024 00:08:56.024 real 0m3.394s 00:08:56.024 user 0m4.187s 00:08:56.024 sys 0m0.600s 00:08:56.024 23:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:56.024 23:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.024 23:48:49 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:56.024 23:48:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:56.024 23:48:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:08:56.024 23:48:49 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:56.024 23:48:49 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:56.024 23:48:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.024 ************************************ 00:08:56.024 START TEST raid_state_function_test 00:08:56.024 ************************************ 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:56.024 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80172 00:08:56.025 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:56.025 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80172' 00:08:56.025 Process raid pid: 80172 00:08:56.025 23:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80172 00:08:56.025 23:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 80172 ']' 00:08:56.025 23:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.025 23:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:56.025 23:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.025 23:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:56.025 23:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.025 [2024-11-02 23:48:50.020729] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:08:56.025 [2024-11-02 23:48:50.020915] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.285 [2024-11-02 23:48:50.155143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.285 [2024-11-02 23:48:50.195798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.285 [2024-11-02 23:48:50.272915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.285 [2024-11-02 23:48:50.272984] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.855 [2024-11-02 23:48:50.897153] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.855 [2024-11-02 23:48:50.897238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.855 [2024-11-02 23:48:50.897258] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.855 [2024-11-02 23:48:50.897272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.855 [2024-11-02 23:48:50.897280] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:08:56.855 [2024-11-02 23:48:50.897295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:56.855 [2024-11-02 23:48:50.897302] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:56.855 [2024-11-02 23:48:50.897315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.855 23:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.115 23:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.115 "name": "Existed_Raid", 00:08:57.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.115 "strip_size_kb": 64, 00:08:57.115 "state": "configuring", 00:08:57.115 "raid_level": "raid0", 00:08:57.115 "superblock": false, 00:08:57.115 "num_base_bdevs": 4, 00:08:57.115 "num_base_bdevs_discovered": 0, 00:08:57.115 "num_base_bdevs_operational": 4, 00:08:57.115 "base_bdevs_list": [ 00:08:57.115 { 00:08:57.115 "name": "BaseBdev1", 00:08:57.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.115 "is_configured": false, 00:08:57.115 "data_offset": 0, 00:08:57.115 "data_size": 0 00:08:57.115 }, 00:08:57.115 { 00:08:57.115 "name": "BaseBdev2", 00:08:57.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.115 "is_configured": false, 00:08:57.115 "data_offset": 0, 00:08:57.115 "data_size": 0 00:08:57.115 }, 00:08:57.115 { 00:08:57.115 "name": "BaseBdev3", 00:08:57.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.115 "is_configured": false, 00:08:57.115 "data_offset": 0, 00:08:57.115 "data_size": 0 00:08:57.115 }, 00:08:57.115 { 00:08:57.115 "name": "BaseBdev4", 00:08:57.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.115 "is_configured": false, 00:08:57.115 "data_offset": 0, 00:08:57.115 "data_size": 0 00:08:57.115 } 00:08:57.115 ] 00:08:57.115 }' 00:08:57.115 23:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.115 23:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.378 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:08:57.378 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.378 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.378 [2024-11-02 23:48:51.348127] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.378 [2024-11-02 23:48:51.348268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:57.378 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.378 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:57.378 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.378 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.378 [2024-11-02 23:48:51.356127] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.378 [2024-11-02 23:48:51.356242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.378 [2024-11-02 23:48:51.356281] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.378 [2024-11-02 23:48:51.356312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.378 [2024-11-02 23:48:51.356335] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:57.378 [2024-11-02 23:48:51.356363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:57.378 [2024-11-02 23:48:51.356385] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:57.378 [2024-11-02 23:48:51.356453] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:57.378 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.379 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:57.379 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.379 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.379 [2024-11-02 23:48:51.379678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.379 BaseBdev1 00:08:57.379 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.379 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:57.379 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:57.379 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:57.379 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:57.379 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:57.379 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:57.379 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:57.379 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.379 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.379 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.379 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:57.379 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.379 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.379 [ 00:08:57.379 { 00:08:57.379 "name": "BaseBdev1", 00:08:57.379 "aliases": [ 00:08:57.379 "990f92ce-4f98-4bc8-be32-817cf7365a9a" 00:08:57.379 ], 00:08:57.379 "product_name": "Malloc disk", 00:08:57.379 "block_size": 512, 00:08:57.379 "num_blocks": 65536, 00:08:57.379 "uuid": "990f92ce-4f98-4bc8-be32-817cf7365a9a", 00:08:57.379 "assigned_rate_limits": { 00:08:57.379 "rw_ios_per_sec": 0, 00:08:57.379 "rw_mbytes_per_sec": 0, 00:08:57.379 "r_mbytes_per_sec": 0, 00:08:57.379 "w_mbytes_per_sec": 0 00:08:57.379 }, 00:08:57.379 "claimed": true, 00:08:57.379 "claim_type": "exclusive_write", 00:08:57.379 "zoned": false, 00:08:57.379 "supported_io_types": { 00:08:57.379 "read": true, 00:08:57.379 "write": true, 00:08:57.379 "unmap": true, 00:08:57.379 "flush": true, 00:08:57.379 "reset": true, 00:08:57.379 "nvme_admin": false, 00:08:57.379 "nvme_io": false, 00:08:57.380 "nvme_io_md": false, 00:08:57.380 "write_zeroes": true, 00:08:57.380 "zcopy": true, 00:08:57.380 "get_zone_info": false, 00:08:57.380 "zone_management": false, 00:08:57.380 "zone_append": false, 00:08:57.380 "compare": false, 00:08:57.380 "compare_and_write": false, 00:08:57.380 "abort": true, 00:08:57.380 "seek_hole": false, 00:08:57.380 "seek_data": false, 00:08:57.380 "copy": true, 00:08:57.380 "nvme_iov_md": false 00:08:57.380 }, 00:08:57.380 "memory_domains": [ 00:08:57.380 { 00:08:57.380 "dma_device_id": "system", 00:08:57.380 "dma_device_type": 1 00:08:57.380 }, 00:08:57.380 { 00:08:57.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.380 "dma_device_type": 2 00:08:57.380 } 00:08:57.380 ], 00:08:57.380 "driver_specific": {} 00:08:57.380 } 00:08:57.380 ] 00:08:57.380 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:57.380 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:57.380 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:57.380 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.380 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.380 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.380 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.380 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:57.380 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.380 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.380 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.380 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.380 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.380 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.380 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.380 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.380 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.647 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.647 "name": "Existed_Raid", 
00:08:57.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.647 "strip_size_kb": 64, 00:08:57.647 "state": "configuring", 00:08:57.647 "raid_level": "raid0", 00:08:57.647 "superblock": false, 00:08:57.647 "num_base_bdevs": 4, 00:08:57.647 "num_base_bdevs_discovered": 1, 00:08:57.647 "num_base_bdevs_operational": 4, 00:08:57.647 "base_bdevs_list": [ 00:08:57.647 { 00:08:57.647 "name": "BaseBdev1", 00:08:57.647 "uuid": "990f92ce-4f98-4bc8-be32-817cf7365a9a", 00:08:57.647 "is_configured": true, 00:08:57.647 "data_offset": 0, 00:08:57.647 "data_size": 65536 00:08:57.647 }, 00:08:57.647 { 00:08:57.647 "name": "BaseBdev2", 00:08:57.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.647 "is_configured": false, 00:08:57.648 "data_offset": 0, 00:08:57.648 "data_size": 0 00:08:57.648 }, 00:08:57.648 { 00:08:57.648 "name": "BaseBdev3", 00:08:57.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.648 "is_configured": false, 00:08:57.648 "data_offset": 0, 00:08:57.648 "data_size": 0 00:08:57.648 }, 00:08:57.648 { 00:08:57.648 "name": "BaseBdev4", 00:08:57.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.648 "is_configured": false, 00:08:57.648 "data_offset": 0, 00:08:57.648 "data_size": 0 00:08:57.648 } 00:08:57.648 ] 00:08:57.648 }' 00:08:57.648 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.648 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.907 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:57.907 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.907 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.907 [2024-11-02 23:48:51.815011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.907 [2024-11-02 23:48:51.815088] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:57.907 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.907 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:57.907 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.907 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.907 [2024-11-02 23:48:51.827004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.907 [2024-11-02 23:48:51.829277] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.907 [2024-11-02 23:48:51.829332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.907 [2024-11-02 23:48:51.829344] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:57.907 [2024-11-02 23:48:51.829354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:57.907 [2024-11-02 23:48:51.829362] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:57.907 [2024-11-02 23:48:51.829373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:57.907 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.907 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:57.907 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:57.907 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:08:57.907 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.908 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.908 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.908 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.908 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:57.908 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.908 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.908 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.908 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.908 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.908 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.908 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.908 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.908 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.908 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.908 "name": "Existed_Raid", 00:08:57.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.908 "strip_size_kb": 64, 00:08:57.908 "state": "configuring", 00:08:57.908 "raid_level": "raid0", 00:08:57.908 "superblock": false, 00:08:57.908 "num_base_bdevs": 4, 00:08:57.908 
"num_base_bdevs_discovered": 1, 00:08:57.908 "num_base_bdevs_operational": 4, 00:08:57.908 "base_bdevs_list": [ 00:08:57.908 { 00:08:57.908 "name": "BaseBdev1", 00:08:57.908 "uuid": "990f92ce-4f98-4bc8-be32-817cf7365a9a", 00:08:57.908 "is_configured": true, 00:08:57.908 "data_offset": 0, 00:08:57.908 "data_size": 65536 00:08:57.908 }, 00:08:57.908 { 00:08:57.908 "name": "BaseBdev2", 00:08:57.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.908 "is_configured": false, 00:08:57.908 "data_offset": 0, 00:08:57.908 "data_size": 0 00:08:57.908 }, 00:08:57.908 { 00:08:57.908 "name": "BaseBdev3", 00:08:57.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.908 "is_configured": false, 00:08:57.908 "data_offset": 0, 00:08:57.908 "data_size": 0 00:08:57.908 }, 00:08:57.908 { 00:08:57.908 "name": "BaseBdev4", 00:08:57.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.908 "is_configured": false, 00:08:57.908 "data_offset": 0, 00:08:57.908 "data_size": 0 00:08:57.908 } 00:08:57.908 ] 00:08:57.908 }' 00:08:57.908 23:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.908 23:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.175 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:58.175 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.175 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.434 BaseBdev2 00:08:58.434 [2024-11-02 23:48:52.283199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:58.434 23:48:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.434 [ 00:08:58.434 { 00:08:58.434 "name": "BaseBdev2", 00:08:58.434 "aliases": [ 00:08:58.434 "cf40c4e4-4adf-408d-af39-483bd47ebf05" 00:08:58.434 ], 00:08:58.434 "product_name": "Malloc disk", 00:08:58.434 "block_size": 512, 00:08:58.434 "num_blocks": 65536, 00:08:58.434 "uuid": "cf40c4e4-4adf-408d-af39-483bd47ebf05", 00:08:58.434 "assigned_rate_limits": { 00:08:58.434 "rw_ios_per_sec": 0, 00:08:58.434 "rw_mbytes_per_sec": 0, 00:08:58.434 "r_mbytes_per_sec": 0, 00:08:58.434 "w_mbytes_per_sec": 0 00:08:58.434 }, 00:08:58.434 "claimed": true, 00:08:58.434 "claim_type": "exclusive_write", 00:08:58.434 "zoned": false, 00:08:58.434 "supported_io_types": { 
00:08:58.434 "read": true, 00:08:58.434 "write": true, 00:08:58.434 "unmap": true, 00:08:58.434 "flush": true, 00:08:58.434 "reset": true, 00:08:58.434 "nvme_admin": false, 00:08:58.434 "nvme_io": false, 00:08:58.434 "nvme_io_md": false, 00:08:58.434 "write_zeroes": true, 00:08:58.434 "zcopy": true, 00:08:58.434 "get_zone_info": false, 00:08:58.434 "zone_management": false, 00:08:58.434 "zone_append": false, 00:08:58.434 "compare": false, 00:08:58.434 "compare_and_write": false, 00:08:58.434 "abort": true, 00:08:58.434 "seek_hole": false, 00:08:58.434 "seek_data": false, 00:08:58.434 "copy": true, 00:08:58.434 "nvme_iov_md": false 00:08:58.434 }, 00:08:58.434 "memory_domains": [ 00:08:58.434 { 00:08:58.434 "dma_device_id": "system", 00:08:58.434 "dma_device_type": 1 00:08:58.434 }, 00:08:58.434 { 00:08:58.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.434 "dma_device_type": 2 00:08:58.434 } 00:08:58.434 ], 00:08:58.434 "driver_specific": {} 00:08:58.434 } 00:08:58.434 ] 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.434 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.435 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.435 "name": "Existed_Raid", 00:08:58.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.435 "strip_size_kb": 64, 00:08:58.435 "state": "configuring", 00:08:58.435 "raid_level": "raid0", 00:08:58.435 "superblock": false, 00:08:58.435 "num_base_bdevs": 4, 00:08:58.435 "num_base_bdevs_discovered": 2, 00:08:58.435 "num_base_bdevs_operational": 4, 00:08:58.435 "base_bdevs_list": [ 00:08:58.435 { 00:08:58.435 "name": "BaseBdev1", 00:08:58.435 "uuid": "990f92ce-4f98-4bc8-be32-817cf7365a9a", 00:08:58.435 "is_configured": true, 00:08:58.435 "data_offset": 0, 00:08:58.435 "data_size": 65536 00:08:58.435 }, 00:08:58.435 { 00:08:58.435 "name": "BaseBdev2", 00:08:58.435 "uuid": "cf40c4e4-4adf-408d-af39-483bd47ebf05", 00:08:58.435 
"is_configured": true, 00:08:58.435 "data_offset": 0, 00:08:58.435 "data_size": 65536 00:08:58.435 }, 00:08:58.435 { 00:08:58.435 "name": "BaseBdev3", 00:08:58.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.435 "is_configured": false, 00:08:58.435 "data_offset": 0, 00:08:58.435 "data_size": 0 00:08:58.435 }, 00:08:58.435 { 00:08:58.435 "name": "BaseBdev4", 00:08:58.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.435 "is_configured": false, 00:08:58.435 "data_offset": 0, 00:08:58.435 "data_size": 0 00:08:58.435 } 00:08:58.435 ] 00:08:58.435 }' 00:08:58.435 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.435 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.694 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:58.694 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.694 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.694 [2024-11-02 23:48:52.776207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:58.694 BaseBdev3 00:08:58.694 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.694 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:58.694 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:58.694 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:58.694 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:58.694 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:58.694 23:48:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:58.694 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:58.694 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.694 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.966 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.966 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:58.966 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.966 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.966 [ 00:08:58.966 { 00:08:58.966 "name": "BaseBdev3", 00:08:58.966 "aliases": [ 00:08:58.966 "a9997aa0-dfaa-44cf-a6a5-23dbc63cc754" 00:08:58.966 ], 00:08:58.966 "product_name": "Malloc disk", 00:08:58.966 "block_size": 512, 00:08:58.966 "num_blocks": 65536, 00:08:58.966 "uuid": "a9997aa0-dfaa-44cf-a6a5-23dbc63cc754", 00:08:58.966 "assigned_rate_limits": { 00:08:58.966 "rw_ios_per_sec": 0, 00:08:58.966 "rw_mbytes_per_sec": 0, 00:08:58.966 "r_mbytes_per_sec": 0, 00:08:58.966 "w_mbytes_per_sec": 0 00:08:58.966 }, 00:08:58.966 "claimed": true, 00:08:58.966 "claim_type": "exclusive_write", 00:08:58.966 "zoned": false, 00:08:58.966 "supported_io_types": { 00:08:58.966 "read": true, 00:08:58.966 "write": true, 00:08:58.966 "unmap": true, 00:08:58.966 "flush": true, 00:08:58.966 "reset": true, 00:08:58.966 "nvme_admin": false, 00:08:58.966 "nvme_io": false, 00:08:58.966 "nvme_io_md": false, 00:08:58.966 "write_zeroes": true, 00:08:58.966 "zcopy": true, 00:08:58.966 "get_zone_info": false, 00:08:58.966 "zone_management": false, 00:08:58.966 "zone_append": false, 00:08:58.966 "compare": false, 00:08:58.966 "compare_and_write": false, 
00:08:58.966 "abort": true, 00:08:58.966 "seek_hole": false, 00:08:58.966 "seek_data": false, 00:08:58.966 "copy": true, 00:08:58.966 "nvme_iov_md": false 00:08:58.966 }, 00:08:58.966 "memory_domains": [ 00:08:58.966 { 00:08:58.966 "dma_device_id": "system", 00:08:58.966 "dma_device_type": 1 00:08:58.966 }, 00:08:58.966 { 00:08:58.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.966 "dma_device_type": 2 00:08:58.966 } 00:08:58.966 ], 00:08:58.967 "driver_specific": {} 00:08:58.967 } 00:08:58.967 ] 00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.967 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.967 "name": "Existed_Raid", 00:08:58.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.967 "strip_size_kb": 64, 00:08:58.968 "state": "configuring", 00:08:58.968 "raid_level": "raid0", 00:08:58.968 "superblock": false, 00:08:58.968 "num_base_bdevs": 4, 00:08:58.968 "num_base_bdevs_discovered": 3, 00:08:58.968 "num_base_bdevs_operational": 4, 00:08:58.968 "base_bdevs_list": [ 00:08:58.968 { 00:08:58.968 "name": "BaseBdev1", 00:08:58.968 "uuid": "990f92ce-4f98-4bc8-be32-817cf7365a9a", 00:08:58.968 "is_configured": true, 00:08:58.968 "data_offset": 0, 00:08:58.968 "data_size": 65536 00:08:58.968 }, 00:08:58.968 { 00:08:58.968 "name": "BaseBdev2", 00:08:58.968 "uuid": "cf40c4e4-4adf-408d-af39-483bd47ebf05", 00:08:58.968 "is_configured": true, 00:08:58.968 "data_offset": 0, 00:08:58.968 "data_size": 65536 00:08:58.968 }, 00:08:58.968 { 00:08:58.968 "name": "BaseBdev3", 00:08:58.968 "uuid": "a9997aa0-dfaa-44cf-a6a5-23dbc63cc754", 00:08:58.968 "is_configured": true, 00:08:58.968 "data_offset": 0, 00:08:58.968 "data_size": 65536 00:08:58.968 }, 00:08:58.968 { 00:08:58.968 "name": "BaseBdev4", 00:08:58.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.968 "is_configured": false, 
00:08:58.968 "data_offset": 0, 00:08:58.968 "data_size": 0 00:08:58.968 } 00:08:58.968 ] 00:08:58.968 }' 00:08:58.968 23:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.968 23:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.236 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:59.236 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.236 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.236 [2024-11-02 23:48:53.292518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:59.236 [2024-11-02 23:48:53.292692] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:59.236 [2024-11-02 23:48:53.292718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:08:59.236 [2024-11-02 23:48:53.293108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:59.236 [2024-11-02 23:48:53.293295] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:59.236 [2024-11-02 23:48:53.293318] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:59.236 [2024-11-02 23:48:53.293583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.236 BaseBdev4 00:08:59.236 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.236 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:08:59.236 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:08:59.236 23:48:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:59.236 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:59.236 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:59.236 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:59.236 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:59.236 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.236 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.236 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.236 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:59.236 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.236 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.236 [ 00:08:59.236 { 00:08:59.236 "name": "BaseBdev4", 00:08:59.236 "aliases": [ 00:08:59.236 "d086a138-27a0-4e8b-b21b-693b19a9da05" 00:08:59.236 ], 00:08:59.236 "product_name": "Malloc disk", 00:08:59.236 "block_size": 512, 00:08:59.236 "num_blocks": 65536, 00:08:59.236 "uuid": "d086a138-27a0-4e8b-b21b-693b19a9da05", 00:08:59.236 "assigned_rate_limits": { 00:08:59.236 "rw_ios_per_sec": 0, 00:08:59.236 "rw_mbytes_per_sec": 0, 00:08:59.236 "r_mbytes_per_sec": 0, 00:08:59.236 "w_mbytes_per_sec": 0 00:08:59.236 }, 00:08:59.236 "claimed": true, 00:08:59.236 "claim_type": "exclusive_write", 00:08:59.236 "zoned": false, 00:08:59.236 "supported_io_types": { 00:08:59.236 "read": true, 00:08:59.236 "write": true, 00:08:59.237 "unmap": true, 00:08:59.237 "flush": true, 00:08:59.237 "reset": true, 00:08:59.237 
"nvme_admin": false, 00:08:59.237 "nvme_io": false, 00:08:59.237 "nvme_io_md": false, 00:08:59.237 "write_zeroes": true, 00:08:59.237 "zcopy": true, 00:08:59.237 "get_zone_info": false, 00:08:59.237 "zone_management": false, 00:08:59.237 "zone_append": false, 00:08:59.237 "compare": false, 00:08:59.237 "compare_and_write": false, 00:08:59.237 "abort": true, 00:08:59.237 "seek_hole": false, 00:08:59.237 "seek_data": false, 00:08:59.237 "copy": true, 00:08:59.237 "nvme_iov_md": false 00:08:59.237 }, 00:08:59.237 "memory_domains": [ 00:08:59.237 { 00:08:59.237 "dma_device_id": "system", 00:08:59.237 "dma_device_type": 1 00:08:59.237 }, 00:08:59.237 { 00:08:59.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.237 "dma_device_type": 2 00:08:59.237 } 00:08:59.237 ], 00:08:59.237 "driver_specific": {} 00:08:59.237 } 00:08:59.237 ] 00:08:59.496 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:59.497 23:48:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.497 "name": "Existed_Raid", 00:08:59.497 "uuid": "cbc3710d-4ab0-4d3b-ac6f-b3afb2a45c0c", 00:08:59.497 "strip_size_kb": 64, 00:08:59.497 "state": "online", 00:08:59.497 "raid_level": "raid0", 00:08:59.497 "superblock": false, 00:08:59.497 "num_base_bdevs": 4, 00:08:59.497 "num_base_bdevs_discovered": 4, 00:08:59.497 "num_base_bdevs_operational": 4, 00:08:59.497 "base_bdevs_list": [ 00:08:59.497 { 00:08:59.497 "name": "BaseBdev1", 00:08:59.497 "uuid": "990f92ce-4f98-4bc8-be32-817cf7365a9a", 00:08:59.497 "is_configured": true, 00:08:59.497 "data_offset": 0, 00:08:59.497 "data_size": 65536 00:08:59.497 }, 00:08:59.497 { 00:08:59.497 "name": "BaseBdev2", 00:08:59.497 "uuid": "cf40c4e4-4adf-408d-af39-483bd47ebf05", 00:08:59.497 "is_configured": true, 00:08:59.497 "data_offset": 0, 00:08:59.497 "data_size": 65536 00:08:59.497 }, 00:08:59.497 { 00:08:59.497 "name": "BaseBdev3", 00:08:59.497 "uuid": 
"a9997aa0-dfaa-44cf-a6a5-23dbc63cc754", 00:08:59.497 "is_configured": true, 00:08:59.497 "data_offset": 0, 00:08:59.497 "data_size": 65536 00:08:59.497 }, 00:08:59.497 { 00:08:59.497 "name": "BaseBdev4", 00:08:59.497 "uuid": "d086a138-27a0-4e8b-b21b-693b19a9da05", 00:08:59.497 "is_configured": true, 00:08:59.497 "data_offset": 0, 00:08:59.497 "data_size": 65536 00:08:59.497 } 00:08:59.497 ] 00:08:59.497 }' 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.497 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.757 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:59.757 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:59.757 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:59.757 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:59.757 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:59.757 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:59.757 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:59.757 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.757 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.757 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:59.757 [2024-11-02 23:48:53.792150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.757 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.757 23:48:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:59.757 "name": "Existed_Raid", 00:08:59.757 "aliases": [ 00:08:59.757 "cbc3710d-4ab0-4d3b-ac6f-b3afb2a45c0c" 00:08:59.757 ], 00:08:59.757 "product_name": "Raid Volume", 00:08:59.757 "block_size": 512, 00:08:59.757 "num_blocks": 262144, 00:08:59.757 "uuid": "cbc3710d-4ab0-4d3b-ac6f-b3afb2a45c0c", 00:08:59.757 "assigned_rate_limits": { 00:08:59.757 "rw_ios_per_sec": 0, 00:08:59.757 "rw_mbytes_per_sec": 0, 00:08:59.757 "r_mbytes_per_sec": 0, 00:08:59.757 "w_mbytes_per_sec": 0 00:08:59.757 }, 00:08:59.757 "claimed": false, 00:08:59.757 "zoned": false, 00:08:59.757 "supported_io_types": { 00:08:59.757 "read": true, 00:08:59.757 "write": true, 00:08:59.757 "unmap": true, 00:08:59.757 "flush": true, 00:08:59.757 "reset": true, 00:08:59.757 "nvme_admin": false, 00:08:59.757 "nvme_io": false, 00:08:59.757 "nvme_io_md": false, 00:08:59.757 "write_zeroes": true, 00:08:59.757 "zcopy": false, 00:08:59.757 "get_zone_info": false, 00:08:59.757 "zone_management": false, 00:08:59.757 "zone_append": false, 00:08:59.757 "compare": false, 00:08:59.757 "compare_and_write": false, 00:08:59.757 "abort": false, 00:08:59.757 "seek_hole": false, 00:08:59.757 "seek_data": false, 00:08:59.757 "copy": false, 00:08:59.757 "nvme_iov_md": false 00:08:59.757 }, 00:08:59.757 "memory_domains": [ 00:08:59.757 { 00:08:59.757 "dma_device_id": "system", 00:08:59.757 "dma_device_type": 1 00:08:59.757 }, 00:08:59.757 { 00:08:59.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.757 "dma_device_type": 2 00:08:59.757 }, 00:08:59.757 { 00:08:59.757 "dma_device_id": "system", 00:08:59.757 "dma_device_type": 1 00:08:59.757 }, 00:08:59.757 { 00:08:59.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.757 "dma_device_type": 2 00:08:59.757 }, 00:08:59.757 { 00:08:59.757 "dma_device_id": "system", 00:08:59.757 "dma_device_type": 1 00:08:59.757 }, 00:08:59.757 { 00:08:59.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:59.757 "dma_device_type": 2 00:08:59.757 }, 00:08:59.757 { 00:08:59.757 "dma_device_id": "system", 00:08:59.757 "dma_device_type": 1 00:08:59.757 }, 00:08:59.757 { 00:08:59.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.757 "dma_device_type": 2 00:08:59.757 } 00:08:59.757 ], 00:08:59.757 "driver_specific": { 00:08:59.757 "raid": { 00:08:59.757 "uuid": "cbc3710d-4ab0-4d3b-ac6f-b3afb2a45c0c", 00:08:59.757 "strip_size_kb": 64, 00:08:59.757 "state": "online", 00:08:59.757 "raid_level": "raid0", 00:08:59.757 "superblock": false, 00:08:59.757 "num_base_bdevs": 4, 00:08:59.757 "num_base_bdevs_discovered": 4, 00:08:59.757 "num_base_bdevs_operational": 4, 00:08:59.757 "base_bdevs_list": [ 00:08:59.757 { 00:08:59.757 "name": "BaseBdev1", 00:08:59.757 "uuid": "990f92ce-4f98-4bc8-be32-817cf7365a9a", 00:08:59.757 "is_configured": true, 00:08:59.757 "data_offset": 0, 00:08:59.757 "data_size": 65536 00:08:59.757 }, 00:08:59.757 { 00:08:59.757 "name": "BaseBdev2", 00:08:59.757 "uuid": "cf40c4e4-4adf-408d-af39-483bd47ebf05", 00:08:59.757 "is_configured": true, 00:08:59.757 "data_offset": 0, 00:08:59.757 "data_size": 65536 00:08:59.758 }, 00:08:59.758 { 00:08:59.758 "name": "BaseBdev3", 00:08:59.758 "uuid": "a9997aa0-dfaa-44cf-a6a5-23dbc63cc754", 00:08:59.758 "is_configured": true, 00:08:59.758 "data_offset": 0, 00:08:59.758 "data_size": 65536 00:08:59.758 }, 00:08:59.758 { 00:08:59.758 "name": "BaseBdev4", 00:08:59.758 "uuid": "d086a138-27a0-4e8b-b21b-693b19a9da05", 00:08:59.758 "is_configured": true, 00:08:59.758 "data_offset": 0, 00:08:59.758 "data_size": 65536 00:08:59.758 } 00:08:59.758 ] 00:08:59.758 } 00:08:59.758 } 00:08:59.758 }' 00:08:59.758 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:00.018 BaseBdev2 00:09:00.018 BaseBdev3 
00:09:00.018 BaseBdev4' 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.018 23:48:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.018 23:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.018 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.018 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.018 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.018 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.018 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:00.018 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.018 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.018 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.018 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.018 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.018 23:48:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.018 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:00.018 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.018 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.018 [2024-11-02 23:48:54.095279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:00.018 [2024-11-02 23:48:54.095425] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.018 [2024-11-02 23:48:54.095530] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.278 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.278 "name": "Existed_Raid", 00:09:00.278 "uuid": "cbc3710d-4ab0-4d3b-ac6f-b3afb2a45c0c", 00:09:00.278 "strip_size_kb": 64, 00:09:00.278 "state": "offline", 00:09:00.278 "raid_level": "raid0", 00:09:00.278 "superblock": false, 00:09:00.278 "num_base_bdevs": 4, 00:09:00.278 "num_base_bdevs_discovered": 3, 00:09:00.278 "num_base_bdevs_operational": 3, 00:09:00.278 "base_bdevs_list": [ 00:09:00.278 { 00:09:00.278 "name": null, 00:09:00.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.278 "is_configured": false, 00:09:00.278 "data_offset": 0, 00:09:00.278 "data_size": 65536 00:09:00.278 }, 00:09:00.278 { 00:09:00.278 "name": "BaseBdev2", 00:09:00.278 "uuid": "cf40c4e4-4adf-408d-af39-483bd47ebf05", 00:09:00.278 "is_configured": 
true, 00:09:00.278 "data_offset": 0, 00:09:00.278 "data_size": 65536 00:09:00.278 }, 00:09:00.278 { 00:09:00.278 "name": "BaseBdev3", 00:09:00.278 "uuid": "a9997aa0-dfaa-44cf-a6a5-23dbc63cc754", 00:09:00.278 "is_configured": true, 00:09:00.278 "data_offset": 0, 00:09:00.278 "data_size": 65536 00:09:00.278 }, 00:09:00.278 { 00:09:00.278 "name": "BaseBdev4", 00:09:00.278 "uuid": "d086a138-27a0-4e8b-b21b-693b19a9da05", 00:09:00.278 "is_configured": true, 00:09:00.278 "data_offset": 0, 00:09:00.278 "data_size": 65536 00:09:00.278 } 00:09:00.279 ] 00:09:00.279 }' 00:09:00.279 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.279 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.542 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:00.542 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.542 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.542 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:00.542 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.542 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.542 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.542 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:00.542 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:00.542 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:00.542 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:00.542 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.542 [2024-11-02 23:48:54.603560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:00.542 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.542 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:00.542 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.542 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:00.542 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.808 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.809 [2024-11-02 23:48:54.684407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:00.809 23:48:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.809 [2024-11-02 23:48:54.761059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:00.809 [2024-11-02 23:48:54.761194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.809 BaseBdev2 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.809 [ 00:09:00.809 { 00:09:00.809 "name": "BaseBdev2", 00:09:00.809 "aliases": [ 00:09:00.809 "5196f85a-9826-4157-9bc0-8f73512e8d7e" 00:09:00.809 ], 00:09:00.809 "product_name": "Malloc disk", 00:09:00.809 "block_size": 512, 00:09:00.809 "num_blocks": 65536, 00:09:00.809 "uuid": "5196f85a-9826-4157-9bc0-8f73512e8d7e", 00:09:00.809 "assigned_rate_limits": { 00:09:00.809 "rw_ios_per_sec": 0, 00:09:00.809 "rw_mbytes_per_sec": 0, 00:09:00.809 "r_mbytes_per_sec": 0, 00:09:00.809 "w_mbytes_per_sec": 0 00:09:00.809 }, 00:09:00.809 "claimed": false, 00:09:00.809 "zoned": false, 00:09:00.809 "supported_io_types": { 00:09:00.809 "read": true, 00:09:00.809 "write": true, 00:09:00.809 "unmap": true, 00:09:00.809 "flush": true, 00:09:00.809 "reset": true, 00:09:00.809 "nvme_admin": false, 00:09:00.809 "nvme_io": false, 00:09:00.809 "nvme_io_md": false, 00:09:00.809 "write_zeroes": true, 00:09:00.809 "zcopy": true, 00:09:00.809 "get_zone_info": false, 00:09:00.809 "zone_management": false, 00:09:00.809 "zone_append": false, 00:09:00.809 "compare": false, 00:09:00.809 "compare_and_write": false, 00:09:00.809 "abort": true, 00:09:00.809 "seek_hole": false, 00:09:00.809 
"seek_data": false, 00:09:00.809 "copy": true, 00:09:00.809 "nvme_iov_md": false 00:09:00.809 }, 00:09:00.809 "memory_domains": [ 00:09:00.809 { 00:09:00.809 "dma_device_id": "system", 00:09:00.809 "dma_device_type": 1 00:09:00.809 }, 00:09:00.809 { 00:09:00.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.809 "dma_device_type": 2 00:09:00.809 } 00:09:00.809 ], 00:09:00.809 "driver_specific": {} 00:09:00.809 } 00:09:00.809 ] 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.809 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.069 BaseBdev3 00:09:01.069 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.069 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:01.069 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:01.069 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:01.069 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:01.069 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:01.069 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:09:01.069 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:01.069 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.069 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.069 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.069 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:01.069 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.069 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.069 [ 00:09:01.069 { 00:09:01.069 "name": "BaseBdev3", 00:09:01.069 "aliases": [ 00:09:01.069 "a5b20aaf-7427-4c57-a008-2a88e6642ec4" 00:09:01.069 ], 00:09:01.069 "product_name": "Malloc disk", 00:09:01.069 "block_size": 512, 00:09:01.069 "num_blocks": 65536, 00:09:01.069 "uuid": "a5b20aaf-7427-4c57-a008-2a88e6642ec4", 00:09:01.069 "assigned_rate_limits": { 00:09:01.069 "rw_ios_per_sec": 0, 00:09:01.069 "rw_mbytes_per_sec": 0, 00:09:01.069 "r_mbytes_per_sec": 0, 00:09:01.069 "w_mbytes_per_sec": 0 00:09:01.069 }, 00:09:01.069 "claimed": false, 00:09:01.069 "zoned": false, 00:09:01.069 "supported_io_types": { 00:09:01.069 "read": true, 00:09:01.069 "write": true, 00:09:01.069 "unmap": true, 00:09:01.069 "flush": true, 00:09:01.069 "reset": true, 00:09:01.069 "nvme_admin": false, 00:09:01.069 "nvme_io": false, 00:09:01.069 "nvme_io_md": false, 00:09:01.069 "write_zeroes": true, 00:09:01.069 "zcopy": true, 00:09:01.070 "get_zone_info": false, 00:09:01.070 "zone_management": false, 00:09:01.070 "zone_append": false, 00:09:01.070 "compare": false, 00:09:01.070 "compare_and_write": false, 00:09:01.070 "abort": true, 00:09:01.070 "seek_hole": false, 00:09:01.070 "seek_data": false, 
00:09:01.070 "copy": true, 00:09:01.070 "nvme_iov_md": false 00:09:01.070 }, 00:09:01.070 "memory_domains": [ 00:09:01.070 { 00:09:01.070 "dma_device_id": "system", 00:09:01.070 "dma_device_type": 1 00:09:01.070 }, 00:09:01.070 { 00:09:01.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.070 "dma_device_type": 2 00:09:01.070 } 00:09:01.070 ], 00:09:01.070 "driver_specific": {} 00:09:01.070 } 00:09:01.070 ] 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.070 BaseBdev4 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:01.070 
23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.070 23:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.070 [ 00:09:01.070 { 00:09:01.070 "name": "BaseBdev4", 00:09:01.070 "aliases": [ 00:09:01.070 "6ba4f8ac-60c0-4e54-b9b0-8084b3c21abc" 00:09:01.070 ], 00:09:01.070 "product_name": "Malloc disk", 00:09:01.070 "block_size": 512, 00:09:01.070 "num_blocks": 65536, 00:09:01.070 "uuid": "6ba4f8ac-60c0-4e54-b9b0-8084b3c21abc", 00:09:01.070 "assigned_rate_limits": { 00:09:01.070 "rw_ios_per_sec": 0, 00:09:01.070 "rw_mbytes_per_sec": 0, 00:09:01.070 "r_mbytes_per_sec": 0, 00:09:01.070 "w_mbytes_per_sec": 0 00:09:01.070 }, 00:09:01.070 "claimed": false, 00:09:01.070 "zoned": false, 00:09:01.070 "supported_io_types": { 00:09:01.070 "read": true, 00:09:01.070 "write": true, 00:09:01.070 "unmap": true, 00:09:01.070 "flush": true, 00:09:01.070 "reset": true, 00:09:01.070 "nvme_admin": false, 00:09:01.070 "nvme_io": false, 00:09:01.070 "nvme_io_md": false, 00:09:01.070 "write_zeroes": true, 00:09:01.070 "zcopy": true, 00:09:01.070 "get_zone_info": false, 00:09:01.070 "zone_management": false, 00:09:01.070 "zone_append": false, 00:09:01.070 "compare": false, 00:09:01.070 "compare_and_write": false, 00:09:01.070 "abort": true, 00:09:01.070 "seek_hole": false, 00:09:01.070 "seek_data": false, 00:09:01.070 
"copy": true, 00:09:01.070 "nvme_iov_md": false 00:09:01.070 }, 00:09:01.070 "memory_domains": [ 00:09:01.070 { 00:09:01.070 "dma_device_id": "system", 00:09:01.070 "dma_device_type": 1 00:09:01.070 }, 00:09:01.070 { 00:09:01.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.070 "dma_device_type": 2 00:09:01.070 } 00:09:01.070 ], 00:09:01.070 "driver_specific": {} 00:09:01.070 } 00:09:01.070 ] 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.070 [2024-11-02 23:48:55.019607] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:01.070 [2024-11-02 23:48:55.019774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:01.070 [2024-11-02 23:48:55.019854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.070 [2024-11-02 23:48:55.022163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.070 [2024-11-02 23:48:55.022271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.070 23:48:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.070 "name": "Existed_Raid", 00:09:01.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.070 "strip_size_kb": 64, 00:09:01.070 "state": "configuring", 00:09:01.070 
"raid_level": "raid0", 00:09:01.070 "superblock": false, 00:09:01.070 "num_base_bdevs": 4, 00:09:01.070 "num_base_bdevs_discovered": 3, 00:09:01.070 "num_base_bdevs_operational": 4, 00:09:01.070 "base_bdevs_list": [ 00:09:01.070 { 00:09:01.070 "name": "BaseBdev1", 00:09:01.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.070 "is_configured": false, 00:09:01.070 "data_offset": 0, 00:09:01.070 "data_size": 0 00:09:01.070 }, 00:09:01.070 { 00:09:01.070 "name": "BaseBdev2", 00:09:01.070 "uuid": "5196f85a-9826-4157-9bc0-8f73512e8d7e", 00:09:01.070 "is_configured": true, 00:09:01.070 "data_offset": 0, 00:09:01.070 "data_size": 65536 00:09:01.070 }, 00:09:01.070 { 00:09:01.070 "name": "BaseBdev3", 00:09:01.070 "uuid": "a5b20aaf-7427-4c57-a008-2a88e6642ec4", 00:09:01.070 "is_configured": true, 00:09:01.070 "data_offset": 0, 00:09:01.070 "data_size": 65536 00:09:01.070 }, 00:09:01.070 { 00:09:01.070 "name": "BaseBdev4", 00:09:01.070 "uuid": "6ba4f8ac-60c0-4e54-b9b0-8084b3c21abc", 00:09:01.070 "is_configured": true, 00:09:01.070 "data_offset": 0, 00:09:01.070 "data_size": 65536 00:09:01.070 } 00:09:01.070 ] 00:09:01.070 }' 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.070 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.330 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:01.330 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.330 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.588 [2024-11-02 23:48:55.422985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:01.588 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.588 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:01.588 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.588 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.588 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.588 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.588 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:01.588 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.588 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.588 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.588 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.588 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.588 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.588 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.588 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.588 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.588 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.588 "name": "Existed_Raid", 00:09:01.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.588 "strip_size_kb": 64, 00:09:01.588 "state": "configuring", 00:09:01.588 "raid_level": "raid0", 00:09:01.588 "superblock": false, 00:09:01.588 
"num_base_bdevs": 4, 00:09:01.588 "num_base_bdevs_discovered": 2, 00:09:01.588 "num_base_bdevs_operational": 4, 00:09:01.588 "base_bdevs_list": [ 00:09:01.588 { 00:09:01.588 "name": "BaseBdev1", 00:09:01.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.588 "is_configured": false, 00:09:01.588 "data_offset": 0, 00:09:01.588 "data_size": 0 00:09:01.588 }, 00:09:01.588 { 00:09:01.588 "name": null, 00:09:01.588 "uuid": "5196f85a-9826-4157-9bc0-8f73512e8d7e", 00:09:01.588 "is_configured": false, 00:09:01.588 "data_offset": 0, 00:09:01.588 "data_size": 65536 00:09:01.588 }, 00:09:01.588 { 00:09:01.588 "name": "BaseBdev3", 00:09:01.588 "uuid": "a5b20aaf-7427-4c57-a008-2a88e6642ec4", 00:09:01.588 "is_configured": true, 00:09:01.588 "data_offset": 0, 00:09:01.588 "data_size": 65536 00:09:01.588 }, 00:09:01.588 { 00:09:01.588 "name": "BaseBdev4", 00:09:01.588 "uuid": "6ba4f8ac-60c0-4e54-b9b0-8084b3c21abc", 00:09:01.588 "is_configured": true, 00:09:01.588 "data_offset": 0, 00:09:01.588 "data_size": 65536 00:09:01.588 } 00:09:01.588 ] 00:09:01.588 }' 00:09:01.588 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.588 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.847 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:01.847 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.847 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.847 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.847 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.847 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:01.847 23:48:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:01.847 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.847 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.847 [2024-11-02 23:48:55.927515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:01.847 BaseBdev1 00:09:01.847 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.847 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:01.847 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:01.847 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:01.847 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:01.847 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:01.847 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:01.847 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:01.847 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.847 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.848 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:02.107 [ 00:09:02.107 { 00:09:02.107 "name": "BaseBdev1", 00:09:02.107 "aliases": [ 00:09:02.107 "b8a54e5b-d51e-4918-9e5d-df472310d58b" 00:09:02.107 ], 00:09:02.107 "product_name": "Malloc disk", 00:09:02.107 "block_size": 512, 00:09:02.107 "num_blocks": 65536, 00:09:02.107 "uuid": "b8a54e5b-d51e-4918-9e5d-df472310d58b", 00:09:02.107 "assigned_rate_limits": { 00:09:02.107 "rw_ios_per_sec": 0, 00:09:02.107 "rw_mbytes_per_sec": 0, 00:09:02.107 "r_mbytes_per_sec": 0, 00:09:02.107 "w_mbytes_per_sec": 0 00:09:02.107 }, 00:09:02.107 "claimed": true, 00:09:02.107 "claim_type": "exclusive_write", 00:09:02.107 "zoned": false, 00:09:02.107 "supported_io_types": { 00:09:02.107 "read": true, 00:09:02.107 "write": true, 00:09:02.107 "unmap": true, 00:09:02.107 "flush": true, 00:09:02.107 "reset": true, 00:09:02.107 "nvme_admin": false, 00:09:02.107 "nvme_io": false, 00:09:02.107 "nvme_io_md": false, 00:09:02.107 "write_zeroes": true, 00:09:02.107 "zcopy": true, 00:09:02.107 "get_zone_info": false, 00:09:02.107 "zone_management": false, 00:09:02.107 "zone_append": false, 00:09:02.107 "compare": false, 00:09:02.107 "compare_and_write": false, 00:09:02.107 "abort": true, 00:09:02.107 "seek_hole": false, 00:09:02.107 "seek_data": false, 00:09:02.107 "copy": true, 00:09:02.107 "nvme_iov_md": false 00:09:02.107 }, 00:09:02.107 "memory_domains": [ 00:09:02.107 { 00:09:02.107 "dma_device_id": "system", 00:09:02.107 "dma_device_type": 1 00:09:02.107 }, 00:09:02.107 { 00:09:02.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.107 "dma_device_type": 2 00:09:02.107 } 00:09:02.107 ], 00:09:02.107 "driver_specific": {} 00:09:02.107 } 00:09:02.107 ] 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.107 23:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.108 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.108 "name": "Existed_Raid", 00:09:02.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.108 "strip_size_kb": 64, 00:09:02.108 "state": "configuring", 00:09:02.108 "raid_level": "raid0", 00:09:02.108 "superblock": false, 
00:09:02.108 "num_base_bdevs": 4, 00:09:02.108 "num_base_bdevs_discovered": 3, 00:09:02.108 "num_base_bdevs_operational": 4, 00:09:02.108 "base_bdevs_list": [ 00:09:02.108 { 00:09:02.108 "name": "BaseBdev1", 00:09:02.108 "uuid": "b8a54e5b-d51e-4918-9e5d-df472310d58b", 00:09:02.108 "is_configured": true, 00:09:02.108 "data_offset": 0, 00:09:02.108 "data_size": 65536 00:09:02.108 }, 00:09:02.108 { 00:09:02.108 "name": null, 00:09:02.108 "uuid": "5196f85a-9826-4157-9bc0-8f73512e8d7e", 00:09:02.108 "is_configured": false, 00:09:02.108 "data_offset": 0, 00:09:02.108 "data_size": 65536 00:09:02.108 }, 00:09:02.108 { 00:09:02.108 "name": "BaseBdev3", 00:09:02.108 "uuid": "a5b20aaf-7427-4c57-a008-2a88e6642ec4", 00:09:02.108 "is_configured": true, 00:09:02.108 "data_offset": 0, 00:09:02.108 "data_size": 65536 00:09:02.108 }, 00:09:02.108 { 00:09:02.108 "name": "BaseBdev4", 00:09:02.108 "uuid": "6ba4f8ac-60c0-4e54-b9b0-8084b3c21abc", 00:09:02.108 "is_configured": true, 00:09:02.108 "data_offset": 0, 00:09:02.108 "data_size": 65536 00:09:02.108 } 00:09:02.108 ] 00:09:02.108 }' 00:09:02.108 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.108 23:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.367 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.367 23:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.367 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:02.367 23:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:02.626 23:48:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.626 [2024-11-02 23:48:56.494792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.626 23:48:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.626 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.626 "name": "Existed_Raid", 00:09:02.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.626 "strip_size_kb": 64, 00:09:02.626 "state": "configuring", 00:09:02.626 "raid_level": "raid0", 00:09:02.626 "superblock": false, 00:09:02.626 "num_base_bdevs": 4, 00:09:02.626 "num_base_bdevs_discovered": 2, 00:09:02.626 "num_base_bdevs_operational": 4, 00:09:02.626 "base_bdevs_list": [ 00:09:02.626 { 00:09:02.626 "name": "BaseBdev1", 00:09:02.626 "uuid": "b8a54e5b-d51e-4918-9e5d-df472310d58b", 00:09:02.626 "is_configured": true, 00:09:02.626 "data_offset": 0, 00:09:02.626 "data_size": 65536 00:09:02.626 }, 00:09:02.626 { 00:09:02.626 "name": null, 00:09:02.626 "uuid": "5196f85a-9826-4157-9bc0-8f73512e8d7e", 00:09:02.626 "is_configured": false, 00:09:02.626 "data_offset": 0, 00:09:02.626 "data_size": 65536 00:09:02.626 }, 00:09:02.626 { 00:09:02.626 "name": null, 00:09:02.626 "uuid": "a5b20aaf-7427-4c57-a008-2a88e6642ec4", 00:09:02.626 "is_configured": false, 00:09:02.626 "data_offset": 0, 00:09:02.626 "data_size": 65536 00:09:02.626 }, 00:09:02.626 { 00:09:02.626 "name": "BaseBdev4", 00:09:02.626 "uuid": "6ba4f8ac-60c0-4e54-b9b0-8084b3c21abc", 00:09:02.626 "is_configured": true, 00:09:02.627 "data_offset": 0, 00:09:02.627 "data_size": 65536 00:09:02.627 } 00:09:02.627 ] 00:09:02.627 }' 00:09:02.627 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.627 23:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.890 [2024-11-02 23:48:56.953982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.890 23:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.151 23:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.151 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.151 "name": "Existed_Raid", 00:09:03.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.151 "strip_size_kb": 64, 00:09:03.151 "state": "configuring", 00:09:03.151 "raid_level": "raid0", 00:09:03.151 "superblock": false, 00:09:03.151 "num_base_bdevs": 4, 00:09:03.151 "num_base_bdevs_discovered": 3, 00:09:03.151 "num_base_bdevs_operational": 4, 00:09:03.151 "base_bdevs_list": [ 00:09:03.151 { 00:09:03.151 "name": "BaseBdev1", 00:09:03.151 "uuid": "b8a54e5b-d51e-4918-9e5d-df472310d58b", 00:09:03.151 "is_configured": true, 00:09:03.151 "data_offset": 0, 00:09:03.151 "data_size": 65536 00:09:03.151 }, 00:09:03.151 { 00:09:03.151 "name": null, 00:09:03.151 "uuid": "5196f85a-9826-4157-9bc0-8f73512e8d7e", 00:09:03.151 "is_configured": false, 00:09:03.151 "data_offset": 0, 00:09:03.151 "data_size": 65536 00:09:03.151 }, 00:09:03.151 { 00:09:03.151 "name": "BaseBdev3", 00:09:03.151 "uuid": "a5b20aaf-7427-4c57-a008-2a88e6642ec4", 
00:09:03.151 "is_configured": true, 00:09:03.151 "data_offset": 0, 00:09:03.151 "data_size": 65536 00:09:03.151 }, 00:09:03.151 { 00:09:03.151 "name": "BaseBdev4", 00:09:03.151 "uuid": "6ba4f8ac-60c0-4e54-b9b0-8084b3c21abc", 00:09:03.151 "is_configured": true, 00:09:03.151 "data_offset": 0, 00:09:03.151 "data_size": 65536 00:09:03.151 } 00:09:03.151 ] 00:09:03.151 }' 00:09:03.151 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.151 23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.411 [2024-11-02 23:48:57.397268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:03.411 23:48:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.411 "name": "Existed_Raid", 00:09:03.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.411 "strip_size_kb": 64, 00:09:03.411 "state": "configuring", 00:09:03.411 "raid_level": "raid0", 00:09:03.411 "superblock": false, 00:09:03.411 "num_base_bdevs": 4, 00:09:03.411 "num_base_bdevs_discovered": 2, 00:09:03.411 
"num_base_bdevs_operational": 4, 00:09:03.411 "base_bdevs_list": [ 00:09:03.411 { 00:09:03.411 "name": null, 00:09:03.411 "uuid": "b8a54e5b-d51e-4918-9e5d-df472310d58b", 00:09:03.411 "is_configured": false, 00:09:03.411 "data_offset": 0, 00:09:03.411 "data_size": 65536 00:09:03.411 }, 00:09:03.411 { 00:09:03.411 "name": null, 00:09:03.411 "uuid": "5196f85a-9826-4157-9bc0-8f73512e8d7e", 00:09:03.411 "is_configured": false, 00:09:03.411 "data_offset": 0, 00:09:03.411 "data_size": 65536 00:09:03.411 }, 00:09:03.411 { 00:09:03.411 "name": "BaseBdev3", 00:09:03.411 "uuid": "a5b20aaf-7427-4c57-a008-2a88e6642ec4", 00:09:03.411 "is_configured": true, 00:09:03.411 "data_offset": 0, 00:09:03.411 "data_size": 65536 00:09:03.411 }, 00:09:03.411 { 00:09:03.411 "name": "BaseBdev4", 00:09:03.411 "uuid": "6ba4f8ac-60c0-4e54-b9b0-8084b3c21abc", 00:09:03.411 "is_configured": true, 00:09:03.411 "data_offset": 0, 00:09:03.411 "data_size": 65536 00:09:03.411 } 00:09:03.411 ] 00:09:03.411 }' 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.411 23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.980 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.980 23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.980 23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.981 [2024-11-02 23:48:57.948806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.981 
23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.981 23:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.981 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.981 "name": "Existed_Raid", 00:09:03.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.981 "strip_size_kb": 64, 00:09:03.981 "state": "configuring", 00:09:03.981 "raid_level": "raid0", 00:09:03.981 "superblock": false, 00:09:03.981 "num_base_bdevs": 4, 00:09:03.981 "num_base_bdevs_discovered": 3, 00:09:03.981 "num_base_bdevs_operational": 4, 00:09:03.981 "base_bdevs_list": [ 00:09:03.981 { 00:09:03.981 "name": null, 00:09:03.981 "uuid": "b8a54e5b-d51e-4918-9e5d-df472310d58b", 00:09:03.981 "is_configured": false, 00:09:03.981 "data_offset": 0, 00:09:03.981 "data_size": 65536 00:09:03.981 }, 00:09:03.981 { 00:09:03.981 "name": "BaseBdev2", 00:09:03.981 "uuid": "5196f85a-9826-4157-9bc0-8f73512e8d7e", 00:09:03.981 "is_configured": true, 00:09:03.981 "data_offset": 0, 00:09:03.981 "data_size": 65536 00:09:03.981 }, 00:09:03.981 { 00:09:03.981 "name": "BaseBdev3", 00:09:03.981 "uuid": "a5b20aaf-7427-4c57-a008-2a88e6642ec4", 00:09:03.981 "is_configured": true, 00:09:03.981 "data_offset": 0, 00:09:03.981 "data_size": 65536 00:09:03.981 }, 00:09:03.981 { 00:09:03.981 "name": "BaseBdev4", 00:09:03.981 "uuid": "6ba4f8ac-60c0-4e54-b9b0-8084b3c21abc", 00:09:03.981 "is_configured": true, 00:09:03.981 "data_offset": 0, 00:09:03.981 "data_size": 65536 00:09:03.981 } 00:09:03.981 ] 00:09:03.981 }' 00:09:03.981 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.981 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.564 23:48:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b8a54e5b-d51e-4918-9e5d-df472310d58b 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.564 [2024-11-02 23:48:58.521009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:04.564 [2024-11-02 23:48:58.521163] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:04.564 [2024-11-02 23:48:58.521194] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:04.564 [2024-11-02 23:48:58.521570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 
00:09:04.564 [2024-11-02 23:48:58.521793] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:04.564 [2024-11-02 23:48:58.521848] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:04.564 [2024-11-02 23:48:58.522123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.564 NewBaseBdev 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:04.564 [ 00:09:04.564 { 00:09:04.564 "name": "NewBaseBdev", 00:09:04.564 "aliases": [ 00:09:04.564 "b8a54e5b-d51e-4918-9e5d-df472310d58b" 00:09:04.564 ], 00:09:04.564 "product_name": "Malloc disk", 00:09:04.564 "block_size": 512, 00:09:04.564 "num_blocks": 65536, 00:09:04.564 "uuid": "b8a54e5b-d51e-4918-9e5d-df472310d58b", 00:09:04.564 "assigned_rate_limits": { 00:09:04.564 "rw_ios_per_sec": 0, 00:09:04.564 "rw_mbytes_per_sec": 0, 00:09:04.564 "r_mbytes_per_sec": 0, 00:09:04.564 "w_mbytes_per_sec": 0 00:09:04.564 }, 00:09:04.564 "claimed": true, 00:09:04.564 "claim_type": "exclusive_write", 00:09:04.564 "zoned": false, 00:09:04.564 "supported_io_types": { 00:09:04.564 "read": true, 00:09:04.564 "write": true, 00:09:04.564 "unmap": true, 00:09:04.564 "flush": true, 00:09:04.564 "reset": true, 00:09:04.564 "nvme_admin": false, 00:09:04.564 "nvme_io": false, 00:09:04.564 "nvme_io_md": false, 00:09:04.564 "write_zeroes": true, 00:09:04.564 "zcopy": true, 00:09:04.564 "get_zone_info": false, 00:09:04.564 "zone_management": false, 00:09:04.564 "zone_append": false, 00:09:04.564 "compare": false, 00:09:04.564 "compare_and_write": false, 00:09:04.564 "abort": true, 00:09:04.564 "seek_hole": false, 00:09:04.564 "seek_data": false, 00:09:04.564 "copy": true, 00:09:04.564 "nvme_iov_md": false 00:09:04.564 }, 00:09:04.564 "memory_domains": [ 00:09:04.564 { 00:09:04.564 "dma_device_id": "system", 00:09:04.564 "dma_device_type": 1 00:09:04.564 }, 00:09:04.564 { 00:09:04.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.564 "dma_device_type": 2 00:09:04.564 } 00:09:04.564 ], 00:09:04.564 "driver_specific": {} 00:09:04.564 } 00:09:04.564 ] 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.564 "name": "Existed_Raid", 00:09:04.564 "uuid": "a465b405-3e0b-4d46-8fb0-bc8b5ff2a170", 00:09:04.564 "strip_size_kb": 64, 00:09:04.564 "state": "online", 00:09:04.564 "raid_level": "raid0", 00:09:04.564 "superblock": false, 00:09:04.564 "num_base_bdevs": 4, 00:09:04.564 
"num_base_bdevs_discovered": 4, 00:09:04.564 "num_base_bdevs_operational": 4, 00:09:04.564 "base_bdevs_list": [ 00:09:04.564 { 00:09:04.564 "name": "NewBaseBdev", 00:09:04.564 "uuid": "b8a54e5b-d51e-4918-9e5d-df472310d58b", 00:09:04.564 "is_configured": true, 00:09:04.564 "data_offset": 0, 00:09:04.564 "data_size": 65536 00:09:04.564 }, 00:09:04.564 { 00:09:04.564 "name": "BaseBdev2", 00:09:04.564 "uuid": "5196f85a-9826-4157-9bc0-8f73512e8d7e", 00:09:04.564 "is_configured": true, 00:09:04.564 "data_offset": 0, 00:09:04.564 "data_size": 65536 00:09:04.564 }, 00:09:04.564 { 00:09:04.564 "name": "BaseBdev3", 00:09:04.564 "uuid": "a5b20aaf-7427-4c57-a008-2a88e6642ec4", 00:09:04.564 "is_configured": true, 00:09:04.564 "data_offset": 0, 00:09:04.564 "data_size": 65536 00:09:04.564 }, 00:09:04.564 { 00:09:04.564 "name": "BaseBdev4", 00:09:04.564 "uuid": "6ba4f8ac-60c0-4e54-b9b0-8084b3c21abc", 00:09:04.564 "is_configured": true, 00:09:04.564 "data_offset": 0, 00:09:04.564 "data_size": 65536 00:09:04.564 } 00:09:04.564 ] 00:09:04.564 }' 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.564 23:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.133 [2024-11-02 23:48:59.044596] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:05.133 "name": "Existed_Raid", 00:09:05.133 "aliases": [ 00:09:05.133 "a465b405-3e0b-4d46-8fb0-bc8b5ff2a170" 00:09:05.133 ], 00:09:05.133 "product_name": "Raid Volume", 00:09:05.133 "block_size": 512, 00:09:05.133 "num_blocks": 262144, 00:09:05.133 "uuid": "a465b405-3e0b-4d46-8fb0-bc8b5ff2a170", 00:09:05.133 "assigned_rate_limits": { 00:09:05.133 "rw_ios_per_sec": 0, 00:09:05.133 "rw_mbytes_per_sec": 0, 00:09:05.133 "r_mbytes_per_sec": 0, 00:09:05.133 "w_mbytes_per_sec": 0 00:09:05.133 }, 00:09:05.133 "claimed": false, 00:09:05.133 "zoned": false, 00:09:05.133 "supported_io_types": { 00:09:05.133 "read": true, 00:09:05.133 "write": true, 00:09:05.133 "unmap": true, 00:09:05.133 "flush": true, 00:09:05.133 "reset": true, 00:09:05.133 "nvme_admin": false, 00:09:05.133 "nvme_io": false, 00:09:05.133 "nvme_io_md": false, 00:09:05.133 "write_zeroes": true, 00:09:05.133 "zcopy": false, 00:09:05.133 "get_zone_info": false, 00:09:05.133 "zone_management": false, 00:09:05.133 "zone_append": false, 00:09:05.133 "compare": false, 00:09:05.133 "compare_and_write": false, 00:09:05.133 "abort": false, 00:09:05.133 "seek_hole": false, 00:09:05.133 "seek_data": false, 00:09:05.133 "copy": false, 00:09:05.133 "nvme_iov_md": false 00:09:05.133 }, 00:09:05.133 "memory_domains": [ 
00:09:05.133 { 00:09:05.133 "dma_device_id": "system", 00:09:05.133 "dma_device_type": 1 00:09:05.133 }, 00:09:05.133 { 00:09:05.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.133 "dma_device_type": 2 00:09:05.133 }, 00:09:05.133 { 00:09:05.133 "dma_device_id": "system", 00:09:05.133 "dma_device_type": 1 00:09:05.133 }, 00:09:05.133 { 00:09:05.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.133 "dma_device_type": 2 00:09:05.133 }, 00:09:05.133 { 00:09:05.133 "dma_device_id": "system", 00:09:05.133 "dma_device_type": 1 00:09:05.133 }, 00:09:05.133 { 00:09:05.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.133 "dma_device_type": 2 00:09:05.133 }, 00:09:05.133 { 00:09:05.133 "dma_device_id": "system", 00:09:05.133 "dma_device_type": 1 00:09:05.133 }, 00:09:05.133 { 00:09:05.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.133 "dma_device_type": 2 00:09:05.133 } 00:09:05.133 ], 00:09:05.133 "driver_specific": { 00:09:05.133 "raid": { 00:09:05.133 "uuid": "a465b405-3e0b-4d46-8fb0-bc8b5ff2a170", 00:09:05.133 "strip_size_kb": 64, 00:09:05.133 "state": "online", 00:09:05.133 "raid_level": "raid0", 00:09:05.133 "superblock": false, 00:09:05.133 "num_base_bdevs": 4, 00:09:05.133 "num_base_bdevs_discovered": 4, 00:09:05.133 "num_base_bdevs_operational": 4, 00:09:05.133 "base_bdevs_list": [ 00:09:05.133 { 00:09:05.133 "name": "NewBaseBdev", 00:09:05.133 "uuid": "b8a54e5b-d51e-4918-9e5d-df472310d58b", 00:09:05.133 "is_configured": true, 00:09:05.133 "data_offset": 0, 00:09:05.133 "data_size": 65536 00:09:05.133 }, 00:09:05.133 { 00:09:05.133 "name": "BaseBdev2", 00:09:05.133 "uuid": "5196f85a-9826-4157-9bc0-8f73512e8d7e", 00:09:05.133 "is_configured": true, 00:09:05.133 "data_offset": 0, 00:09:05.133 "data_size": 65536 00:09:05.133 }, 00:09:05.133 { 00:09:05.133 "name": "BaseBdev3", 00:09:05.133 "uuid": "a5b20aaf-7427-4c57-a008-2a88e6642ec4", 00:09:05.133 "is_configured": true, 00:09:05.133 "data_offset": 0, 00:09:05.133 "data_size": 65536 
00:09:05.133 },
00:09:05.133 {
00:09:05.133 "name": "BaseBdev4",
00:09:05.133 "uuid": "6ba4f8ac-60c0-4e54-b9b0-8084b3c21abc",
00:09:05.133 "is_configured": true,
00:09:05.133 "data_offset": 0,
00:09:05.133 "data_size": 65536
00:09:05.133 }
00:09:05.133 ]
00:09:05.133 }
00:09:05.133 }
00:09:05.133 }'
00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:09:05.133 BaseBdev2
00:09:05.133 BaseBdev3
00:09:05.133 BaseBdev4'
00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.133 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.134 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.393 [2024-11-02 23:48:59.339776] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:05.393 [2024-11-02 23:48:59.339821] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:05.393 [2024-11-02 23:48:59.339929] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:05.393 [2024-11-02 23:48:59.340016] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:05.393 [2024-11-02 23:48:59.340037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80172
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 80172 ']'
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 80172
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80172
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80172'
killing process with pid 80172
23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 80172
[2024-11-02 23:48:59.387167] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:05.393 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 80172
[2024-11-02 23:48:59.464453] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:09:05.983
00:09:05.983 real 0m9.863s
00:09:05.983 user 0m16.532s
00:09:05.983 sys 0m2.192s
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.983 ************************************
00:09:05.983 END TEST raid_state_function_test
00:09:05.983 ************************************
00:09:05.983 23:48:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true
00:09:05.983 23:48:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:09:05.983 23:48:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:05.983 23:48:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:05.983 ************************************
00:09:05.983 START TEST raid_state_function_test_sb
00:09:05.983 ************************************
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
Process raid pid: 80822
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80822
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80822'
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80822
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 80822 ']'
00:09:05.983 23:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:05.984 23:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:05.984 23:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:05.984 23:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable
00:09:05.984 23:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:05.984 [2024-11-02 23:48:59.967525] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization...
00:09:05.984 [2024-11-02 23:48:59.967660] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:06.252 [2024-11-02 23:49:00.125185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:06.252 [2024-11-02 23:49:00.169773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:06.252 [2024-11-02 23:49:00.246949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:06.252 [2024-11-02 23:49:00.247093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:06.838 [2024-11-02 23:49:00.811173] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:06.838 [2024-11-02 23:49:00.811384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:06.838 [2024-11-02 23:49:00.811402] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:06.838 [2024-11-02 23:49:00.811416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:06.838 [2024-11-02 23:49:00.811424] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:06.838 [2024-11-02 23:49:00.811439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:06.838 [2024-11-02 23:49:00.811447] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:09:06.838 [2024-11-02 23:49:00.811459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.838 23:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:06.838 "name": "Existed_Raid",
00:09:06.838 "uuid": "e6e56890-bff3-402a-ad7b-7701b126d69f",
00:09:06.838 "strip_size_kb": 64,
00:09:06.838 "state": "configuring",
00:09:06.838 "raid_level": "raid0",
00:09:06.838 "superblock": true,
00:09:06.839 "num_base_bdevs": 4,
00:09:06.839 "num_base_bdevs_discovered": 0,
00:09:06.839 "num_base_bdevs_operational": 4,
00:09:06.839 "base_bdevs_list": [
00:09:06.839 {
00:09:06.839 "name": "BaseBdev1",
00:09:06.839 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.839 "is_configured": false,
00:09:06.839 "data_offset": 0,
00:09:06.839 "data_size": 0
00:09:06.839 },
00:09:06.839 {
00:09:06.839 "name": "BaseBdev2",
00:09:06.839 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.839 "is_configured": false,
00:09:06.839 "data_offset": 0,
00:09:06.839 "data_size": 0
00:09:06.839 },
00:09:06.839 {
00:09:06.839 "name": "BaseBdev3",
00:09:06.839 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.839 "is_configured": false,
00:09:06.839 "data_offset": 0,
00:09:06.839 "data_size": 0
00:09:06.839 },
00:09:06.839 {
00:09:06.839 "name": "BaseBdev4",
00:09:06.839 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.839 "is_configured": false,
00:09:06.839 "data_offset": 0,
00:09:06.839 "data_size": 0
00:09:06.839 }
00:09:06.839 ]
00:09:06.839 }'
00:09:06.839 23:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:06.839 23:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:07.406 [2024-11-02 23:49:01.274390] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:07.406 [2024-11-02 23:49:01.274551] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:07.406 [2024-11-02 23:49:01.282378] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:07.406 [2024-11-02 23:49:01.282482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:07.406 [2024-11-02 23:49:01.282516] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:07.406 [2024-11-02 23:49:01.282546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:07.406 [2024-11-02 23:49:01.282568] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:07.406 [2024-11-02 23:49:01.282595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:07.406 [2024-11-02 23:49:01.282653] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:09:07.406 [2024-11-02 23:49:01.282684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:07.406 [2024-11-02 23:49:01.305984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:07.406 BaseBdev1
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:07.406 [
00:09:07.406 {
00:09:07.406 "name": "BaseBdev1",
00:09:07.406 "aliases": [
00:09:07.406 "a6d3ecea-6e06-4ef6-b09d-e18c767eed6b"
00:09:07.406 ],
00:09:07.406 "product_name": "Malloc disk",
00:09:07.406 "block_size": 512,
00:09:07.406 "num_blocks": 65536,
00:09:07.406 "uuid": "a6d3ecea-6e06-4ef6-b09d-e18c767eed6b",
00:09:07.406 "assigned_rate_limits": {
00:09:07.406 "rw_ios_per_sec": 0,
00:09:07.406 "rw_mbytes_per_sec": 0,
00:09:07.406 "r_mbytes_per_sec": 0,
00:09:07.406 "w_mbytes_per_sec": 0
00:09:07.406 },
00:09:07.406 "claimed": true,
00:09:07.406 "claim_type": "exclusive_write",
00:09:07.406 "zoned": false,
00:09:07.406 "supported_io_types": {
00:09:07.406 "read": true,
00:09:07.406 "write": true,
00:09:07.406 "unmap": true,
00:09:07.406 "flush": true,
00:09:07.406 "reset": true,
00:09:07.406 "nvme_admin": false,
00:09:07.406 "nvme_io": false,
00:09:07.406 "nvme_io_md": false,
00:09:07.406 "write_zeroes": true,
00:09:07.406 "zcopy": true,
00:09:07.406 "get_zone_info": false,
00:09:07.406 "zone_management": false,
00:09:07.406 "zone_append": false,
00:09:07.406 "compare": false,
00:09:07.406 "compare_and_write": false,
00:09:07.406 "abort": true,
00:09:07.406 "seek_hole": false,
00:09:07.406 "seek_data": false,
00:09:07.406 "copy": true,
00:09:07.406 "nvme_iov_md": false
00:09:07.406 },
00:09:07.406 "memory_domains": [
00:09:07.406 {
00:09:07.406 "dma_device_id": "system",
00:09:07.406 "dma_device_type": 1
00:09:07.406 },
00:09:07.406 {
00:09:07.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:07.406 "dma_device_type": 2
00:09:07.406 }
00:09:07.406 ],
00:09:07.406 "driver_specific": {}
00:09:07.406 }
00:09:07.406 ]
00:09:07.406 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.407 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:09:07.407 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:09:07.407 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:07.407 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:07.407 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:07.407 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:07.407 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:07.407 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:07.407 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:07.407 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:07.407 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:07.407 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:07.407 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.407 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:07.407 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:07.407 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.407 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:07.407 "name": "Existed_Raid",
00:09:07.407 "uuid": "2bfc9d72-0926-49c0-8823-5a45ecce9b38",
00:09:07.407 "strip_size_kb": 64,
00:09:07.407 "state": "configuring",
00:09:07.407 "raid_level": "raid0",
00:09:07.407 "superblock": true,
00:09:07.407 "num_base_bdevs": 4,
00:09:07.407 "num_base_bdevs_discovered": 1,
00:09:07.407 "num_base_bdevs_operational": 4,
00:09:07.407 "base_bdevs_list": [
00:09:07.407 {
00:09:07.407 "name": "BaseBdev1",
00:09:07.407 "uuid": "a6d3ecea-6e06-4ef6-b09d-e18c767eed6b",
00:09:07.407 "is_configured": true,
00:09:07.407 "data_offset": 2048,
00:09:07.407 "data_size": 63488
00:09:07.407 },
00:09:07.407 {
00:09:07.407 "name": "BaseBdev2",
00:09:07.407 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.407 "is_configured": false,
00:09:07.407 "data_offset": 0,
00:09:07.407 "data_size": 0
00:09:07.407 },
00:09:07.407 {
00:09:07.407 "name": "BaseBdev3",
00:09:07.407 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.407 "is_configured": false,
00:09:07.407 "data_offset": 0,
00:09:07.407 "data_size": 0
00:09:07.407 },
00:09:07.407 {
00:09:07.407 "name": "BaseBdev4",
00:09:07.407 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.407 "is_configured": false,
00:09:07.407 "data_offset": 0,
00:09:07.407 "data_size": 0
00:09:07.407 }
00:09:07.407 ]
00:09:07.407 }'
00:09:07.407 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:07.407 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:07.665 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:07.665 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.666 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:07.666 [2024-11-02 23:49:01.753327] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:07.666 [2024-11-02 23:49:01.753493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring
00:09:07.666 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:07.925 [2024-11-02 23:49:01.765351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:07.925 [2024-11-02 23:49:01.767616] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:07.925 [2024-11-02 23:49:01.767668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:07.925 [2024-11-02 23:49:01.767679] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:07.925 [2024-11-02 23:49:01.767690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:07.925 [2024-11-02 23:49:01.767698] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:09:07.925 [2024-11-02 23:49:01.767708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:07.925 "name": "Existed_Raid",
00:09:07.925 "uuid": "c627b816-455a-4af0-8a1a-17e39ded27f0",
00:09:07.925 "strip_size_kb": 64,
00:09:07.925 "state": "configuring",
00:09:07.925 "raid_level": "raid0",
00:09:07.925 "superblock": true,
00:09:07.925 "num_base_bdevs": 4,
00:09:07.925 "num_base_bdevs_discovered": 1,
00:09:07.925 "num_base_bdevs_operational": 4,
00:09:07.925 "base_bdevs_list": [
00:09:07.925 {
00:09:07.925 "name": "BaseBdev1",
00:09:07.925 "uuid": "a6d3ecea-6e06-4ef6-b09d-e18c767eed6b",
00:09:07.925 "is_configured": true,
00:09:07.925 "data_offset": 2048,
00:09:07.925 "data_size": 63488
00:09:07.925 },
00:09:07.925 {
00:09:07.925 "name": "BaseBdev2",
00:09:07.925 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.925 "is_configured": false,
00:09:07.925 "data_offset": 0,
00:09:07.925 "data_size": 0
00:09:07.925 },
00:09:07.925 {
00:09:07.925 "name": "BaseBdev3",
00:09:07.925 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.925 "is_configured": false,
00:09:07.925 "data_offset": 0,
00:09:07.925 "data_size": 0
00:09:07.925 },
00:09:07.925 {
00:09:07.925 "name": "BaseBdev4",
00:09:07.925 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.925 "is_configured": false,
00:09:07.925 "data_offset": 0,
00:09:07.925 "data_size": 0
00:09:07.925 }
00:09:07.925 ]
00:09:07.925 }'
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:07.925 23:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:08.185 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:08.185 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.185 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:08.185 [2024-11-02 23:49:02.257694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:08.185 BaseBdev2
00:09:08.185 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.185 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:08.185 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:09:08.185 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:09:08.185 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:09:08.185 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:09:08.185 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:09:08.185 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:09:08.185 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.185 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:08.186 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.186 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:08.186 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.186 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:08.444 [
00:09:08.444 {
00:09:08.444 "name": "BaseBdev2",
00:09:08.444 "aliases": [
00:09:08.444 "b49c54f8-80dd-4511-9910-a836b315a91e"
00:09:08.444 ],
00:09:08.444 "product_name": "Malloc disk",
00:09:08.444 "block_size": 512,
00:09:08.444 "num_blocks": 65536,
00:09:08.444 "uuid": "b49c54f8-80dd-4511-9910-a836b315a91e",
00:09:08.444 "assigned_rate_limits": { 00:09:08.444 "rw_ios_per_sec": 0, 00:09:08.444 "rw_mbytes_per_sec": 0, 00:09:08.444 "r_mbytes_per_sec": 0, 00:09:08.444 "w_mbytes_per_sec": 0 00:09:08.444 }, 00:09:08.444 "claimed": true, 00:09:08.444 "claim_type": "exclusive_write", 00:09:08.444 "zoned": false, 00:09:08.444 "supported_io_types": { 00:09:08.444 "read": true, 00:09:08.444 "write": true, 00:09:08.444 "unmap": true, 00:09:08.444 "flush": true, 00:09:08.444 "reset": true, 00:09:08.444 "nvme_admin": false, 00:09:08.444 "nvme_io": false, 00:09:08.444 "nvme_io_md": false, 00:09:08.444 "write_zeroes": true, 00:09:08.444 "zcopy": true, 00:09:08.444 "get_zone_info": false, 00:09:08.444 "zone_management": false, 00:09:08.444 "zone_append": false, 00:09:08.444 "compare": false, 00:09:08.444 "compare_and_write": false, 00:09:08.444 "abort": true, 00:09:08.444 "seek_hole": false, 00:09:08.444 "seek_data": false, 00:09:08.444 "copy": true, 00:09:08.444 "nvme_iov_md": false 00:09:08.444 }, 00:09:08.444 "memory_domains": [ 00:09:08.444 { 00:09:08.444 "dma_device_id": "system", 00:09:08.444 "dma_device_type": 1 00:09:08.444 }, 00:09:08.444 { 00:09:08.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.444 "dma_device_type": 2 00:09:08.444 } 00:09:08.444 ], 00:09:08.444 "driver_specific": {} 00:09:08.444 } 00:09:08.444 ] 00:09:08.444 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.444 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:08.444 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:08.444 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:08.444 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:08.444 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:08.444 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.444 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.444 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.444 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:08.444 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.444 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.444 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.444 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.444 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.444 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.444 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.444 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.445 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.445 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.445 "name": "Existed_Raid", 00:09:08.445 "uuid": "c627b816-455a-4af0-8a1a-17e39ded27f0", 00:09:08.445 "strip_size_kb": 64, 00:09:08.445 "state": "configuring", 00:09:08.445 "raid_level": "raid0", 00:09:08.445 "superblock": true, 00:09:08.445 "num_base_bdevs": 4, 00:09:08.445 "num_base_bdevs_discovered": 2, 00:09:08.445 
"num_base_bdevs_operational": 4, 00:09:08.445 "base_bdevs_list": [ 00:09:08.445 { 00:09:08.445 "name": "BaseBdev1", 00:09:08.445 "uuid": "a6d3ecea-6e06-4ef6-b09d-e18c767eed6b", 00:09:08.445 "is_configured": true, 00:09:08.445 "data_offset": 2048, 00:09:08.445 "data_size": 63488 00:09:08.445 }, 00:09:08.445 { 00:09:08.445 "name": "BaseBdev2", 00:09:08.445 "uuid": "b49c54f8-80dd-4511-9910-a836b315a91e", 00:09:08.445 "is_configured": true, 00:09:08.445 "data_offset": 2048, 00:09:08.445 "data_size": 63488 00:09:08.445 }, 00:09:08.445 { 00:09:08.445 "name": "BaseBdev3", 00:09:08.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.445 "is_configured": false, 00:09:08.445 "data_offset": 0, 00:09:08.445 "data_size": 0 00:09:08.445 }, 00:09:08.445 { 00:09:08.445 "name": "BaseBdev4", 00:09:08.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.445 "is_configured": false, 00:09:08.445 "data_offset": 0, 00:09:08.445 "data_size": 0 00:09:08.445 } 00:09:08.445 ] 00:09:08.445 }' 00:09:08.445 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.445 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.703 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:08.703 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.703 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.703 [2024-11-02 23:49:02.780189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.703 BaseBdev3 00:09:08.703 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.703 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:08.703 23:49:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:08.703 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:08.703 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:08.703 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:08.703 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:08.703 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:08.703 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.703 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.703 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.703 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:08.703 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.703 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.961 [ 00:09:08.961 { 00:09:08.961 "name": "BaseBdev3", 00:09:08.961 "aliases": [ 00:09:08.961 "a780327b-d41a-4c90-af9a-073c43205d2e" 00:09:08.961 ], 00:09:08.961 "product_name": "Malloc disk", 00:09:08.961 "block_size": 512, 00:09:08.961 "num_blocks": 65536, 00:09:08.961 "uuid": "a780327b-d41a-4c90-af9a-073c43205d2e", 00:09:08.961 "assigned_rate_limits": { 00:09:08.961 "rw_ios_per_sec": 0, 00:09:08.961 "rw_mbytes_per_sec": 0, 00:09:08.961 "r_mbytes_per_sec": 0, 00:09:08.961 "w_mbytes_per_sec": 0 00:09:08.961 }, 00:09:08.961 "claimed": true, 00:09:08.961 "claim_type": "exclusive_write", 00:09:08.962 "zoned": false, 00:09:08.962 "supported_io_types": { 
00:09:08.962 "read": true, 00:09:08.962 "write": true, 00:09:08.962 "unmap": true, 00:09:08.962 "flush": true, 00:09:08.962 "reset": true, 00:09:08.962 "nvme_admin": false, 00:09:08.962 "nvme_io": false, 00:09:08.962 "nvme_io_md": false, 00:09:08.962 "write_zeroes": true, 00:09:08.962 "zcopy": true, 00:09:08.962 "get_zone_info": false, 00:09:08.962 "zone_management": false, 00:09:08.962 "zone_append": false, 00:09:08.962 "compare": false, 00:09:08.962 "compare_and_write": false, 00:09:08.962 "abort": true, 00:09:08.962 "seek_hole": false, 00:09:08.962 "seek_data": false, 00:09:08.962 "copy": true, 00:09:08.962 "nvme_iov_md": false 00:09:08.962 }, 00:09:08.962 "memory_domains": [ 00:09:08.962 { 00:09:08.962 "dma_device_id": "system", 00:09:08.962 "dma_device_type": 1 00:09:08.962 }, 00:09:08.962 { 00:09:08.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.962 "dma_device_type": 2 00:09:08.962 } 00:09:08.962 ], 00:09:08.962 "driver_specific": {} 00:09:08.962 } 00:09:08.962 ] 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.962 "name": "Existed_Raid", 00:09:08.962 "uuid": "c627b816-455a-4af0-8a1a-17e39ded27f0", 00:09:08.962 "strip_size_kb": 64, 00:09:08.962 "state": "configuring", 00:09:08.962 "raid_level": "raid0", 00:09:08.962 "superblock": true, 00:09:08.962 "num_base_bdevs": 4, 00:09:08.962 "num_base_bdevs_discovered": 3, 00:09:08.962 "num_base_bdevs_operational": 4, 00:09:08.962 "base_bdevs_list": [ 00:09:08.962 { 00:09:08.962 "name": "BaseBdev1", 00:09:08.962 "uuid": "a6d3ecea-6e06-4ef6-b09d-e18c767eed6b", 00:09:08.962 "is_configured": true, 00:09:08.962 "data_offset": 2048, 00:09:08.962 "data_size": 63488 00:09:08.962 }, 00:09:08.962 { 00:09:08.962 "name": "BaseBdev2", 00:09:08.962 
"uuid": "b49c54f8-80dd-4511-9910-a836b315a91e", 00:09:08.962 "is_configured": true, 00:09:08.962 "data_offset": 2048, 00:09:08.962 "data_size": 63488 00:09:08.962 }, 00:09:08.962 { 00:09:08.962 "name": "BaseBdev3", 00:09:08.962 "uuid": "a780327b-d41a-4c90-af9a-073c43205d2e", 00:09:08.962 "is_configured": true, 00:09:08.962 "data_offset": 2048, 00:09:08.962 "data_size": 63488 00:09:08.962 }, 00:09:08.962 { 00:09:08.962 "name": "BaseBdev4", 00:09:08.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.962 "is_configured": false, 00:09:08.962 "data_offset": 0, 00:09:08.962 "data_size": 0 00:09:08.962 } 00:09:08.962 ] 00:09:08.962 }' 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.962 23:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.220 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:09.220 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.220 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.220 [2024-11-02 23:49:03.288843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:09.220 [2024-11-02 23:49:03.289226] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:09.220 [2024-11-02 23:49:03.289250] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:09.220 [2024-11-02 23:49:03.289606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:09.220 BaseBdev4 00:09:09.220 [2024-11-02 23:49:03.289771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:09.220 [2024-11-02 23:49:03.289799] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:09:09.220 [2024-11-02 23:49:03.289938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.220 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.220 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:09.220 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:09:09.220 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:09.220 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:09.220 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:09.220 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:09.220 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:09.220 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.220 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.220 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.220 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:09.220 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.220 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.479 [ 00:09:09.479 { 00:09:09.479 "name": "BaseBdev4", 00:09:09.479 "aliases": [ 00:09:09.479 "8a6250a6-391a-4194-bb14-f79ebb009499" 00:09:09.479 ], 00:09:09.479 "product_name": "Malloc disk", 00:09:09.479 "block_size": 512, 00:09:09.479 
"num_blocks": 65536, 00:09:09.479 "uuid": "8a6250a6-391a-4194-bb14-f79ebb009499", 00:09:09.479 "assigned_rate_limits": { 00:09:09.479 "rw_ios_per_sec": 0, 00:09:09.479 "rw_mbytes_per_sec": 0, 00:09:09.479 "r_mbytes_per_sec": 0, 00:09:09.479 "w_mbytes_per_sec": 0 00:09:09.479 }, 00:09:09.479 "claimed": true, 00:09:09.479 "claim_type": "exclusive_write", 00:09:09.479 "zoned": false, 00:09:09.479 "supported_io_types": { 00:09:09.479 "read": true, 00:09:09.479 "write": true, 00:09:09.479 "unmap": true, 00:09:09.479 "flush": true, 00:09:09.479 "reset": true, 00:09:09.479 "nvme_admin": false, 00:09:09.479 "nvme_io": false, 00:09:09.479 "nvme_io_md": false, 00:09:09.479 "write_zeroes": true, 00:09:09.479 "zcopy": true, 00:09:09.479 "get_zone_info": false, 00:09:09.479 "zone_management": false, 00:09:09.479 "zone_append": false, 00:09:09.479 "compare": false, 00:09:09.479 "compare_and_write": false, 00:09:09.479 "abort": true, 00:09:09.479 "seek_hole": false, 00:09:09.479 "seek_data": false, 00:09:09.479 "copy": true, 00:09:09.479 "nvme_iov_md": false 00:09:09.479 }, 00:09:09.479 "memory_domains": [ 00:09:09.479 { 00:09:09.479 "dma_device_id": "system", 00:09:09.479 "dma_device_type": 1 00:09:09.479 }, 00:09:09.479 { 00:09:09.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.479 "dma_device_type": 2 00:09:09.479 } 00:09:09.479 ], 00:09:09.479 "driver_specific": {} 00:09:09.479 } 00:09:09.479 ] 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.479 "name": "Existed_Raid", 00:09:09.479 "uuid": "c627b816-455a-4af0-8a1a-17e39ded27f0", 00:09:09.479 "strip_size_kb": 64, 00:09:09.479 "state": "online", 00:09:09.479 "raid_level": "raid0", 00:09:09.479 "superblock": true, 00:09:09.479 "num_base_bdevs": 4, 
00:09:09.479 "num_base_bdevs_discovered": 4, 00:09:09.479 "num_base_bdevs_operational": 4, 00:09:09.479 "base_bdevs_list": [ 00:09:09.479 { 00:09:09.479 "name": "BaseBdev1", 00:09:09.479 "uuid": "a6d3ecea-6e06-4ef6-b09d-e18c767eed6b", 00:09:09.479 "is_configured": true, 00:09:09.479 "data_offset": 2048, 00:09:09.479 "data_size": 63488 00:09:09.479 }, 00:09:09.479 { 00:09:09.479 "name": "BaseBdev2", 00:09:09.479 "uuid": "b49c54f8-80dd-4511-9910-a836b315a91e", 00:09:09.479 "is_configured": true, 00:09:09.479 "data_offset": 2048, 00:09:09.479 "data_size": 63488 00:09:09.479 }, 00:09:09.479 { 00:09:09.479 "name": "BaseBdev3", 00:09:09.479 "uuid": "a780327b-d41a-4c90-af9a-073c43205d2e", 00:09:09.479 "is_configured": true, 00:09:09.479 "data_offset": 2048, 00:09:09.479 "data_size": 63488 00:09:09.479 }, 00:09:09.479 { 00:09:09.479 "name": "BaseBdev4", 00:09:09.479 "uuid": "8a6250a6-391a-4194-bb14-f79ebb009499", 00:09:09.479 "is_configured": true, 00:09:09.479 "data_offset": 2048, 00:09:09.479 "data_size": 63488 00:09:09.479 } 00:09:09.479 ] 00:09:09.479 }' 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.479 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.738 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:09.738 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:09.738 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:09.738 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:09.738 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:09.738 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:09.738 
23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:09.738 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.738 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.738 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:09.738 [2024-11-02 23:49:03.752490] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.738 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.738 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:09.738 "name": "Existed_Raid", 00:09:09.738 "aliases": [ 00:09:09.738 "c627b816-455a-4af0-8a1a-17e39ded27f0" 00:09:09.738 ], 00:09:09.738 "product_name": "Raid Volume", 00:09:09.738 "block_size": 512, 00:09:09.738 "num_blocks": 253952, 00:09:09.738 "uuid": "c627b816-455a-4af0-8a1a-17e39ded27f0", 00:09:09.738 "assigned_rate_limits": { 00:09:09.738 "rw_ios_per_sec": 0, 00:09:09.738 "rw_mbytes_per_sec": 0, 00:09:09.738 "r_mbytes_per_sec": 0, 00:09:09.738 "w_mbytes_per_sec": 0 00:09:09.738 }, 00:09:09.738 "claimed": false, 00:09:09.738 "zoned": false, 00:09:09.738 "supported_io_types": { 00:09:09.738 "read": true, 00:09:09.738 "write": true, 00:09:09.738 "unmap": true, 00:09:09.738 "flush": true, 00:09:09.738 "reset": true, 00:09:09.738 "nvme_admin": false, 00:09:09.738 "nvme_io": false, 00:09:09.738 "nvme_io_md": false, 00:09:09.738 "write_zeroes": true, 00:09:09.738 "zcopy": false, 00:09:09.738 "get_zone_info": false, 00:09:09.738 "zone_management": false, 00:09:09.738 "zone_append": false, 00:09:09.738 "compare": false, 00:09:09.738 "compare_and_write": false, 00:09:09.738 "abort": false, 00:09:09.738 "seek_hole": false, 00:09:09.738 "seek_data": false, 00:09:09.738 "copy": false, 00:09:09.738 
"nvme_iov_md": false 00:09:09.738 }, 00:09:09.738 "memory_domains": [ 00:09:09.738 { 00:09:09.738 "dma_device_id": "system", 00:09:09.738 "dma_device_type": 1 00:09:09.738 }, 00:09:09.738 { 00:09:09.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.738 "dma_device_type": 2 00:09:09.738 }, 00:09:09.738 { 00:09:09.738 "dma_device_id": "system", 00:09:09.738 "dma_device_type": 1 00:09:09.738 }, 00:09:09.738 { 00:09:09.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.738 "dma_device_type": 2 00:09:09.738 }, 00:09:09.738 { 00:09:09.738 "dma_device_id": "system", 00:09:09.738 "dma_device_type": 1 00:09:09.738 }, 00:09:09.738 { 00:09:09.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.738 "dma_device_type": 2 00:09:09.738 }, 00:09:09.738 { 00:09:09.738 "dma_device_id": "system", 00:09:09.738 "dma_device_type": 1 00:09:09.738 }, 00:09:09.738 { 00:09:09.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.738 "dma_device_type": 2 00:09:09.738 } 00:09:09.738 ], 00:09:09.738 "driver_specific": { 00:09:09.738 "raid": { 00:09:09.738 "uuid": "c627b816-455a-4af0-8a1a-17e39ded27f0", 00:09:09.738 "strip_size_kb": 64, 00:09:09.738 "state": "online", 00:09:09.738 "raid_level": "raid0", 00:09:09.738 "superblock": true, 00:09:09.738 "num_base_bdevs": 4, 00:09:09.738 "num_base_bdevs_discovered": 4, 00:09:09.738 "num_base_bdevs_operational": 4, 00:09:09.738 "base_bdevs_list": [ 00:09:09.738 { 00:09:09.738 "name": "BaseBdev1", 00:09:09.738 "uuid": "a6d3ecea-6e06-4ef6-b09d-e18c767eed6b", 00:09:09.738 "is_configured": true, 00:09:09.738 "data_offset": 2048, 00:09:09.738 "data_size": 63488 00:09:09.738 }, 00:09:09.738 { 00:09:09.738 "name": "BaseBdev2", 00:09:09.738 "uuid": "b49c54f8-80dd-4511-9910-a836b315a91e", 00:09:09.738 "is_configured": true, 00:09:09.738 "data_offset": 2048, 00:09:09.738 "data_size": 63488 00:09:09.738 }, 00:09:09.738 { 00:09:09.738 "name": "BaseBdev3", 00:09:09.738 "uuid": "a780327b-d41a-4c90-af9a-073c43205d2e", 00:09:09.738 "is_configured": true, 
00:09:09.738 "data_offset": 2048, 00:09:09.738 "data_size": 63488 00:09:09.738 }, 00:09:09.738 { 00:09:09.738 "name": "BaseBdev4", 00:09:09.738 "uuid": "8a6250a6-391a-4194-bb14-f79ebb009499", 00:09:09.738 "is_configured": true, 00:09:09.738 "data_offset": 2048, 00:09:09.738 "data_size": 63488 00:09:09.738 } 00:09:09.738 ] 00:09:09.738 } 00:09:09.738 } 00:09:09.738 }' 00:09:09.738 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:09.997 BaseBdev2 00:09:09.997 BaseBdev3 00:09:09.997 BaseBdev4' 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.997 23:49:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.997 23:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.997 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.997 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.997 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.997 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:09.997 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.997 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.997 [2024-11-02 23:49:04.043727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:09.997 [2024-11-02 23:49:04.043891] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.997 [2024-11-02 23:49:04.043974] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.997 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.997 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:09.997 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:09.997 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:09.997 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:09.997 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:09.997 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:09.997 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.998 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:09.998 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.998 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.998 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.998 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.998 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.998 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.998 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.998 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.998 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.998 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.998 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.998 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:10.256 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.256 "name": "Existed_Raid", 00:09:10.256 "uuid": "c627b816-455a-4af0-8a1a-17e39ded27f0", 00:09:10.256 "strip_size_kb": 64, 00:09:10.256 "state": "offline", 00:09:10.256 "raid_level": "raid0", 00:09:10.256 "superblock": true, 00:09:10.256 "num_base_bdevs": 4, 00:09:10.256 "num_base_bdevs_discovered": 3, 00:09:10.256 "num_base_bdevs_operational": 3, 00:09:10.256 "base_bdevs_list": [ 00:09:10.256 { 00:09:10.256 "name": null, 00:09:10.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.256 "is_configured": false, 00:09:10.256 "data_offset": 0, 00:09:10.256 "data_size": 63488 00:09:10.256 }, 00:09:10.256 { 00:09:10.256 "name": "BaseBdev2", 00:09:10.256 "uuid": "b49c54f8-80dd-4511-9910-a836b315a91e", 00:09:10.256 "is_configured": true, 00:09:10.256 "data_offset": 2048, 00:09:10.256 "data_size": 63488 00:09:10.256 }, 00:09:10.256 { 00:09:10.256 "name": "BaseBdev3", 00:09:10.256 "uuid": "a780327b-d41a-4c90-af9a-073c43205d2e", 00:09:10.256 "is_configured": true, 00:09:10.256 "data_offset": 2048, 00:09:10.256 "data_size": 63488 00:09:10.256 }, 00:09:10.256 { 00:09:10.256 "name": "BaseBdev4", 00:09:10.256 "uuid": "8a6250a6-391a-4194-bb14-f79ebb009499", 00:09:10.256 "is_configured": true, 00:09:10.256 "data_offset": 2048, 00:09:10.256 "data_size": 63488 00:09:10.256 } 00:09:10.256 ] 00:09:10.256 }' 00:09:10.256 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.256 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.514 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:10.514 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:10.514 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.514 
23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.514 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.514 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:10.514 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.514 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:10.514 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:10.514 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:10.514 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.514 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.514 [2024-11-02 23:49:04.556547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:10.514 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.514 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:10.514 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:10.514 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.514 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.514 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.514 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:10.514 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:10.774 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:10.774 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:10.774 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:10.774 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.774 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.774 [2024-11-02 23:49:04.637845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:10.774 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.774 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:10.774 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:10.774 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.774 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:10.774 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.774 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.774 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.774 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:10.774 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:10.774 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:10.774 23:49:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.774 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.774 [2024-11-02 23:49:04.719361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:10.774 [2024-11-02 23:49:04.719545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:10.774 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.775 BaseBdev2 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.775 [ 00:09:10.775 { 00:09:10.775 "name": "BaseBdev2", 00:09:10.775 "aliases": [ 00:09:10.775 
"9bf2cc90-d48d-4d19-b36d-ba4920b13ae7" 00:09:10.775 ], 00:09:10.775 "product_name": "Malloc disk", 00:09:10.775 "block_size": 512, 00:09:10.775 "num_blocks": 65536, 00:09:10.775 "uuid": "9bf2cc90-d48d-4d19-b36d-ba4920b13ae7", 00:09:10.775 "assigned_rate_limits": { 00:09:10.775 "rw_ios_per_sec": 0, 00:09:10.775 "rw_mbytes_per_sec": 0, 00:09:10.775 "r_mbytes_per_sec": 0, 00:09:10.775 "w_mbytes_per_sec": 0 00:09:10.775 }, 00:09:10.775 "claimed": false, 00:09:10.775 "zoned": false, 00:09:10.775 "supported_io_types": { 00:09:10.775 "read": true, 00:09:10.775 "write": true, 00:09:10.775 "unmap": true, 00:09:10.775 "flush": true, 00:09:10.775 "reset": true, 00:09:10.775 "nvme_admin": false, 00:09:10.775 "nvme_io": false, 00:09:10.775 "nvme_io_md": false, 00:09:10.775 "write_zeroes": true, 00:09:10.775 "zcopy": true, 00:09:10.775 "get_zone_info": false, 00:09:10.775 "zone_management": false, 00:09:10.775 "zone_append": false, 00:09:10.775 "compare": false, 00:09:10.775 "compare_and_write": false, 00:09:10.775 "abort": true, 00:09:10.775 "seek_hole": false, 00:09:10.775 "seek_data": false, 00:09:10.775 "copy": true, 00:09:10.775 "nvme_iov_md": false 00:09:10.775 }, 00:09:10.775 "memory_domains": [ 00:09:10.775 { 00:09:10.775 "dma_device_id": "system", 00:09:10.775 "dma_device_type": 1 00:09:10.775 }, 00:09:10.775 { 00:09:10.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.775 "dma_device_type": 2 00:09:10.775 } 00:09:10.775 ], 00:09:10.775 "driver_specific": {} 00:09:10.775 } 00:09:10.775 ] 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:10.775 23:49:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.775 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.035 BaseBdev3 00:09:11.035 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.035 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:11.035 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:11.035 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:11.035 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:11.035 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:11.035 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:11.035 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:11.035 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.035 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.035 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.035 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:11.035 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.035 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.035 [ 00:09:11.035 { 
00:09:11.035 "name": "BaseBdev3", 00:09:11.035 "aliases": [ 00:09:11.035 "7cd359fb-fad4-44c1-bbf3-4a97018e03fb" 00:09:11.035 ], 00:09:11.035 "product_name": "Malloc disk", 00:09:11.035 "block_size": 512, 00:09:11.035 "num_blocks": 65536, 00:09:11.035 "uuid": "7cd359fb-fad4-44c1-bbf3-4a97018e03fb", 00:09:11.035 "assigned_rate_limits": { 00:09:11.036 "rw_ios_per_sec": 0, 00:09:11.036 "rw_mbytes_per_sec": 0, 00:09:11.036 "r_mbytes_per_sec": 0, 00:09:11.036 "w_mbytes_per_sec": 0 00:09:11.036 }, 00:09:11.036 "claimed": false, 00:09:11.036 "zoned": false, 00:09:11.036 "supported_io_types": { 00:09:11.036 "read": true, 00:09:11.036 "write": true, 00:09:11.036 "unmap": true, 00:09:11.036 "flush": true, 00:09:11.036 "reset": true, 00:09:11.036 "nvme_admin": false, 00:09:11.036 "nvme_io": false, 00:09:11.036 "nvme_io_md": false, 00:09:11.036 "write_zeroes": true, 00:09:11.036 "zcopy": true, 00:09:11.036 "get_zone_info": false, 00:09:11.036 "zone_management": false, 00:09:11.036 "zone_append": false, 00:09:11.036 "compare": false, 00:09:11.036 "compare_and_write": false, 00:09:11.036 "abort": true, 00:09:11.036 "seek_hole": false, 00:09:11.036 "seek_data": false, 00:09:11.036 "copy": true, 00:09:11.036 "nvme_iov_md": false 00:09:11.036 }, 00:09:11.036 "memory_domains": [ 00:09:11.036 { 00:09:11.036 "dma_device_id": "system", 00:09:11.036 "dma_device_type": 1 00:09:11.036 }, 00:09:11.036 { 00:09:11.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.036 "dma_device_type": 2 00:09:11.036 } 00:09:11.036 ], 00:09:11.036 "driver_specific": {} 00:09:11.036 } 00:09:11.036 ] 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.036 BaseBdev4 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:11.036 [ 00:09:11.036 { 00:09:11.036 "name": "BaseBdev4", 00:09:11.036 "aliases": [ 00:09:11.036 "7acf3a99-42de-43c2-aaa8-5b55eeb0cf41" 00:09:11.036 ], 00:09:11.036 "product_name": "Malloc disk", 00:09:11.036 "block_size": 512, 00:09:11.036 "num_blocks": 65536, 00:09:11.036 "uuid": "7acf3a99-42de-43c2-aaa8-5b55eeb0cf41", 00:09:11.036 "assigned_rate_limits": { 00:09:11.036 "rw_ios_per_sec": 0, 00:09:11.036 "rw_mbytes_per_sec": 0, 00:09:11.036 "r_mbytes_per_sec": 0, 00:09:11.036 "w_mbytes_per_sec": 0 00:09:11.036 }, 00:09:11.036 "claimed": false, 00:09:11.036 "zoned": false, 00:09:11.036 "supported_io_types": { 00:09:11.036 "read": true, 00:09:11.036 "write": true, 00:09:11.036 "unmap": true, 00:09:11.036 "flush": true, 00:09:11.036 "reset": true, 00:09:11.036 "nvme_admin": false, 00:09:11.036 "nvme_io": false, 00:09:11.036 "nvme_io_md": false, 00:09:11.036 "write_zeroes": true, 00:09:11.036 "zcopy": true, 00:09:11.036 "get_zone_info": false, 00:09:11.036 "zone_management": false, 00:09:11.036 "zone_append": false, 00:09:11.036 "compare": false, 00:09:11.036 "compare_and_write": false, 00:09:11.036 "abort": true, 00:09:11.036 "seek_hole": false, 00:09:11.036 "seek_data": false, 00:09:11.036 "copy": true, 00:09:11.036 "nvme_iov_md": false 00:09:11.036 }, 00:09:11.036 "memory_domains": [ 00:09:11.036 { 00:09:11.036 "dma_device_id": "system", 00:09:11.036 "dma_device_type": 1 00:09:11.036 }, 00:09:11.036 { 00:09:11.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.036 "dma_device_type": 2 00:09:11.036 } 00:09:11.036 ], 00:09:11.036 "driver_specific": {} 00:09:11.036 } 00:09:11.036 ] 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:11.036 23:49:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.036 [2024-11-02 23:49:04.978867] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:11.036 [2024-11-02 23:49:04.979031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:11.036 [2024-11-02 23:49:04.979127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.036 [2024-11-02 23:49:04.981513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.036 [2024-11-02 23:49:04.981638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.036 23:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.036 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.036 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.036 "name": "Existed_Raid", 00:09:11.036 "uuid": "93efb38a-77a2-4a4f-9834-a1d46ee584cb", 00:09:11.036 "strip_size_kb": 64, 00:09:11.036 "state": "configuring", 00:09:11.036 "raid_level": "raid0", 00:09:11.036 "superblock": true, 00:09:11.036 "num_base_bdevs": 4, 00:09:11.036 "num_base_bdevs_discovered": 3, 00:09:11.036 "num_base_bdevs_operational": 4, 00:09:11.036 "base_bdevs_list": [ 00:09:11.036 { 00:09:11.036 "name": "BaseBdev1", 00:09:11.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.036 "is_configured": false, 00:09:11.036 "data_offset": 0, 00:09:11.036 "data_size": 0 00:09:11.036 }, 00:09:11.036 { 00:09:11.036 "name": "BaseBdev2", 00:09:11.036 "uuid": "9bf2cc90-d48d-4d19-b36d-ba4920b13ae7", 00:09:11.036 "is_configured": true, 00:09:11.036 "data_offset": 2048, 00:09:11.036 "data_size": 63488 
00:09:11.036 }, 00:09:11.036 { 00:09:11.036 "name": "BaseBdev3", 00:09:11.036 "uuid": "7cd359fb-fad4-44c1-bbf3-4a97018e03fb", 00:09:11.036 "is_configured": true, 00:09:11.036 "data_offset": 2048, 00:09:11.036 "data_size": 63488 00:09:11.036 }, 00:09:11.036 { 00:09:11.036 "name": "BaseBdev4", 00:09:11.036 "uuid": "7acf3a99-42de-43c2-aaa8-5b55eeb0cf41", 00:09:11.036 "is_configured": true, 00:09:11.036 "data_offset": 2048, 00:09:11.036 "data_size": 63488 00:09:11.036 } 00:09:11.036 ] 00:09:11.036 }' 00:09:11.036 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.036 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.330 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:11.330 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.330 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.330 [2024-11-02 23:49:05.414055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:11.603 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.603 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:11.603 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.603 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.603 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.603 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.603 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:11.603 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.603 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.603 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.603 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.603 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.603 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.604 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.604 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.604 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.604 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.604 "name": "Existed_Raid", 00:09:11.604 "uuid": "93efb38a-77a2-4a4f-9834-a1d46ee584cb", 00:09:11.604 "strip_size_kb": 64, 00:09:11.604 "state": "configuring", 00:09:11.604 "raid_level": "raid0", 00:09:11.604 "superblock": true, 00:09:11.604 "num_base_bdevs": 4, 00:09:11.604 "num_base_bdevs_discovered": 2, 00:09:11.604 "num_base_bdevs_operational": 4, 00:09:11.604 "base_bdevs_list": [ 00:09:11.604 { 00:09:11.604 "name": "BaseBdev1", 00:09:11.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.604 "is_configured": false, 00:09:11.604 "data_offset": 0, 00:09:11.604 "data_size": 0 00:09:11.604 }, 00:09:11.604 { 00:09:11.604 "name": null, 00:09:11.604 "uuid": "9bf2cc90-d48d-4d19-b36d-ba4920b13ae7", 00:09:11.604 "is_configured": false, 00:09:11.604 "data_offset": 0, 00:09:11.604 "data_size": 63488 
00:09:11.604 }, 00:09:11.604 { 00:09:11.604 "name": "BaseBdev3", 00:09:11.604 "uuid": "7cd359fb-fad4-44c1-bbf3-4a97018e03fb", 00:09:11.604 "is_configured": true, 00:09:11.604 "data_offset": 2048, 00:09:11.604 "data_size": 63488 00:09:11.604 }, 00:09:11.604 { 00:09:11.604 "name": "BaseBdev4", 00:09:11.604 "uuid": "7acf3a99-42de-43c2-aaa8-5b55eeb0cf41", 00:09:11.604 "is_configured": true, 00:09:11.604 "data_offset": 2048, 00:09:11.604 "data_size": 63488 00:09:11.604 } 00:09:11.604 ] 00:09:11.604 }' 00:09:11.604 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.604 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.864 [2024-11-02 23:49:05.906106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:11.864 BaseBdev1 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.864 [ 00:09:11.864 { 00:09:11.864 "name": "BaseBdev1", 00:09:11.864 "aliases": [ 00:09:11.864 "81ac9c0c-3604-4386-bd18-66680e25de58" 00:09:11.864 ], 00:09:11.864 "product_name": "Malloc disk", 00:09:11.864 "block_size": 512, 00:09:11.864 "num_blocks": 65536, 00:09:11.864 "uuid": "81ac9c0c-3604-4386-bd18-66680e25de58", 00:09:11.864 "assigned_rate_limits": { 00:09:11.864 "rw_ios_per_sec": 0, 00:09:11.864 "rw_mbytes_per_sec": 0, 
00:09:11.864 "r_mbytes_per_sec": 0, 00:09:11.864 "w_mbytes_per_sec": 0 00:09:11.864 }, 00:09:11.864 "claimed": true, 00:09:11.864 "claim_type": "exclusive_write", 00:09:11.864 "zoned": false, 00:09:11.864 "supported_io_types": { 00:09:11.864 "read": true, 00:09:11.864 "write": true, 00:09:11.864 "unmap": true, 00:09:11.864 "flush": true, 00:09:11.864 "reset": true, 00:09:11.864 "nvme_admin": false, 00:09:11.864 "nvme_io": false, 00:09:11.864 "nvme_io_md": false, 00:09:11.864 "write_zeroes": true, 00:09:11.864 "zcopy": true, 00:09:11.864 "get_zone_info": false, 00:09:11.864 "zone_management": false, 00:09:11.864 "zone_append": false, 00:09:11.864 "compare": false, 00:09:11.864 "compare_and_write": false, 00:09:11.864 "abort": true, 00:09:11.864 "seek_hole": false, 00:09:11.864 "seek_data": false, 00:09:11.864 "copy": true, 00:09:11.864 "nvme_iov_md": false 00:09:11.864 }, 00:09:11.864 "memory_domains": [ 00:09:11.864 { 00:09:11.864 "dma_device_id": "system", 00:09:11.864 "dma_device_type": 1 00:09:11.864 }, 00:09:11.864 { 00:09:11.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.864 "dma_device_type": 2 00:09:11.864 } 00:09:11.864 ], 00:09:11.864 "driver_specific": {} 00:09:11.864 } 00:09:11.864 ] 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.864 23:49:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.864 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.124 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.124 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.124 "name": "Existed_Raid", 00:09:12.124 "uuid": "93efb38a-77a2-4a4f-9834-a1d46ee584cb", 00:09:12.124 "strip_size_kb": 64, 00:09:12.124 "state": "configuring", 00:09:12.124 "raid_level": "raid0", 00:09:12.124 "superblock": true, 00:09:12.124 "num_base_bdevs": 4, 00:09:12.124 "num_base_bdevs_discovered": 3, 00:09:12.124 "num_base_bdevs_operational": 4, 00:09:12.124 "base_bdevs_list": [ 00:09:12.124 { 00:09:12.124 "name": "BaseBdev1", 00:09:12.124 "uuid": "81ac9c0c-3604-4386-bd18-66680e25de58", 00:09:12.124 "is_configured": true, 00:09:12.124 "data_offset": 2048, 00:09:12.124 "data_size": 63488 00:09:12.124 }, 00:09:12.124 { 
00:09:12.124 "name": null, 00:09:12.124 "uuid": "9bf2cc90-d48d-4d19-b36d-ba4920b13ae7", 00:09:12.124 "is_configured": false, 00:09:12.124 "data_offset": 0, 00:09:12.124 "data_size": 63488 00:09:12.124 }, 00:09:12.124 { 00:09:12.124 "name": "BaseBdev3", 00:09:12.124 "uuid": "7cd359fb-fad4-44c1-bbf3-4a97018e03fb", 00:09:12.124 "is_configured": true, 00:09:12.124 "data_offset": 2048, 00:09:12.124 "data_size": 63488 00:09:12.124 }, 00:09:12.124 { 00:09:12.124 "name": "BaseBdev4", 00:09:12.124 "uuid": "7acf3a99-42de-43c2-aaa8-5b55eeb0cf41", 00:09:12.124 "is_configured": true, 00:09:12.124 "data_offset": 2048, 00:09:12.124 "data_size": 63488 00:09:12.124 } 00:09:12.124 ] 00:09:12.124 }' 00:09:12.124 23:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.124 23:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.383 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.383 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:12.383 23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.383 23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.384 [2024-11-02 23:49:06.401370] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.384 23:49:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.384 "name": "Existed_Raid", 00:09:12.384 "uuid": "93efb38a-77a2-4a4f-9834-a1d46ee584cb", 00:09:12.384 "strip_size_kb": 64, 00:09:12.384 "state": "configuring", 00:09:12.384 "raid_level": "raid0", 00:09:12.384 "superblock": true, 00:09:12.384 "num_base_bdevs": 4, 00:09:12.384 "num_base_bdevs_discovered": 2, 00:09:12.384 "num_base_bdevs_operational": 4, 00:09:12.384 "base_bdevs_list": [ 00:09:12.384 { 00:09:12.384 "name": "BaseBdev1", 00:09:12.384 "uuid": "81ac9c0c-3604-4386-bd18-66680e25de58", 00:09:12.384 "is_configured": true, 00:09:12.384 "data_offset": 2048, 00:09:12.384 "data_size": 63488 00:09:12.384 }, 00:09:12.384 { 00:09:12.384 "name": null, 00:09:12.384 "uuid": "9bf2cc90-d48d-4d19-b36d-ba4920b13ae7", 00:09:12.384 "is_configured": false, 00:09:12.384 "data_offset": 0, 00:09:12.384 "data_size": 63488 00:09:12.384 }, 00:09:12.384 { 00:09:12.384 "name": null, 00:09:12.384 "uuid": "7cd359fb-fad4-44c1-bbf3-4a97018e03fb", 00:09:12.384 "is_configured": false, 00:09:12.384 "data_offset": 0, 00:09:12.384 "data_size": 63488 00:09:12.384 }, 00:09:12.384 { 00:09:12.384 "name": "BaseBdev4", 00:09:12.384 "uuid": "7acf3a99-42de-43c2-aaa8-5b55eeb0cf41", 00:09:12.384 "is_configured": true, 00:09:12.384 "data_offset": 2048, 00:09:12.384 "data_size": 63488 00:09:12.384 } 00:09:12.384 ] 00:09:12.384 }' 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.384 23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.953 
23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.953 [2024-11-02 23:49:06.884585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.953 "name": "Existed_Raid", 00:09:12.953 "uuid": "93efb38a-77a2-4a4f-9834-a1d46ee584cb", 00:09:12.953 "strip_size_kb": 64, 00:09:12.953 "state": "configuring", 00:09:12.953 "raid_level": "raid0", 00:09:12.953 "superblock": true, 00:09:12.953 "num_base_bdevs": 4, 00:09:12.953 "num_base_bdevs_discovered": 3, 00:09:12.953 "num_base_bdevs_operational": 4, 00:09:12.953 "base_bdevs_list": [ 00:09:12.953 { 00:09:12.953 "name": "BaseBdev1", 00:09:12.953 "uuid": "81ac9c0c-3604-4386-bd18-66680e25de58", 00:09:12.953 "is_configured": true, 00:09:12.953 "data_offset": 2048, 00:09:12.953 "data_size": 63488 00:09:12.953 }, 00:09:12.953 { 00:09:12.953 "name": null, 00:09:12.953 "uuid": "9bf2cc90-d48d-4d19-b36d-ba4920b13ae7", 00:09:12.953 "is_configured": false, 00:09:12.953 "data_offset": 0, 00:09:12.953 "data_size": 63488 00:09:12.953 }, 00:09:12.953 { 00:09:12.953 "name": "BaseBdev3", 00:09:12.953 "uuid": "7cd359fb-fad4-44c1-bbf3-4a97018e03fb", 00:09:12.953 "is_configured": true, 00:09:12.953 "data_offset": 2048, 00:09:12.953 "data_size": 63488 00:09:12.953 }, 00:09:12.953 { 00:09:12.953 "name": "BaseBdev4", 00:09:12.953 "uuid": 
"7acf3a99-42de-43c2-aaa8-5b55eeb0cf41", 00:09:12.953 "is_configured": true, 00:09:12.953 "data_offset": 2048, 00:09:12.953 "data_size": 63488 00:09:12.953 } 00:09:12.953 ] 00:09:12.953 }' 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.953 23:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.522 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:13.522 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.522 23:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.522 23:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.522 23:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.522 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:13.522 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:13.522 23:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.523 [2024-11-02 23:49:07.340002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.523 "name": "Existed_Raid", 00:09:13.523 "uuid": "93efb38a-77a2-4a4f-9834-a1d46ee584cb", 00:09:13.523 "strip_size_kb": 64, 00:09:13.523 "state": "configuring", 00:09:13.523 "raid_level": "raid0", 00:09:13.523 "superblock": true, 00:09:13.523 "num_base_bdevs": 4, 00:09:13.523 "num_base_bdevs_discovered": 2, 00:09:13.523 "num_base_bdevs_operational": 4, 00:09:13.523 "base_bdevs_list": [ 00:09:13.523 { 00:09:13.523 "name": null, 00:09:13.523 
"uuid": "81ac9c0c-3604-4386-bd18-66680e25de58", 00:09:13.523 "is_configured": false, 00:09:13.523 "data_offset": 0, 00:09:13.523 "data_size": 63488 00:09:13.523 }, 00:09:13.523 { 00:09:13.523 "name": null, 00:09:13.523 "uuid": "9bf2cc90-d48d-4d19-b36d-ba4920b13ae7", 00:09:13.523 "is_configured": false, 00:09:13.523 "data_offset": 0, 00:09:13.523 "data_size": 63488 00:09:13.523 }, 00:09:13.523 { 00:09:13.523 "name": "BaseBdev3", 00:09:13.523 "uuid": "7cd359fb-fad4-44c1-bbf3-4a97018e03fb", 00:09:13.523 "is_configured": true, 00:09:13.523 "data_offset": 2048, 00:09:13.523 "data_size": 63488 00:09:13.523 }, 00:09:13.523 { 00:09:13.523 "name": "BaseBdev4", 00:09:13.523 "uuid": "7acf3a99-42de-43c2-aaa8-5b55eeb0cf41", 00:09:13.523 "is_configured": true, 00:09:13.523 "data_offset": 2048, 00:09:13.523 "data_size": 63488 00:09:13.523 } 00:09:13.523 ] 00:09:13.523 }' 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.523 23:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.782 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.782 23:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.782 23:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.782 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:13.782 23:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.783 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:13.783 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:13.783 23:49:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.783 23:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.783 [2024-11-02 23:49:07.835347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.783 23:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.783 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:13.783 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.783 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.783 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.783 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.783 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:13.783 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.783 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.783 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.783 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.783 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.783 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.783 23:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.783 23:49:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.783 23:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.042 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.042 "name": "Existed_Raid", 00:09:14.042 "uuid": "93efb38a-77a2-4a4f-9834-a1d46ee584cb", 00:09:14.042 "strip_size_kb": 64, 00:09:14.042 "state": "configuring", 00:09:14.042 "raid_level": "raid0", 00:09:14.042 "superblock": true, 00:09:14.042 "num_base_bdevs": 4, 00:09:14.042 "num_base_bdevs_discovered": 3, 00:09:14.042 "num_base_bdevs_operational": 4, 00:09:14.042 "base_bdevs_list": [ 00:09:14.042 { 00:09:14.042 "name": null, 00:09:14.042 "uuid": "81ac9c0c-3604-4386-bd18-66680e25de58", 00:09:14.042 "is_configured": false, 00:09:14.042 "data_offset": 0, 00:09:14.042 "data_size": 63488 00:09:14.042 }, 00:09:14.042 { 00:09:14.042 "name": "BaseBdev2", 00:09:14.042 "uuid": "9bf2cc90-d48d-4d19-b36d-ba4920b13ae7", 00:09:14.042 "is_configured": true, 00:09:14.042 "data_offset": 2048, 00:09:14.042 "data_size": 63488 00:09:14.042 }, 00:09:14.042 { 00:09:14.042 "name": "BaseBdev3", 00:09:14.042 "uuid": "7cd359fb-fad4-44c1-bbf3-4a97018e03fb", 00:09:14.042 "is_configured": true, 00:09:14.042 "data_offset": 2048, 00:09:14.042 "data_size": 63488 00:09:14.042 }, 00:09:14.042 { 00:09:14.042 "name": "BaseBdev4", 00:09:14.042 "uuid": "7acf3a99-42de-43c2-aaa8-5b55eeb0cf41", 00:09:14.042 "is_configured": true, 00:09:14.042 "data_offset": 2048, 00:09:14.042 "data_size": 63488 00:09:14.042 } 00:09:14.042 ] 00:09:14.042 }' 00:09:14.042 23:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.042 23:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.302 23:49:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 81ac9c0c-3604-4386-bd18-66680e25de58 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.302 [2024-11-02 23:49:08.323616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:14.302 [2024-11-02 23:49:08.324008] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:14.302 [2024-11-02 23:49:08.324071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:14.302 [2024-11-02 23:49:08.324405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:09:14.302 NewBaseBdev 00:09:14.302 [2024-11-02 23:49:08.324584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:14.302 [2024-11-02 23:49:08.324608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:14.302 [2024-11-02 23:49:08.324729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.302 23:49:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.302 [ 00:09:14.302 { 00:09:14.302 "name": "NewBaseBdev", 00:09:14.302 "aliases": [ 00:09:14.302 "81ac9c0c-3604-4386-bd18-66680e25de58" 00:09:14.302 ], 00:09:14.302 "product_name": "Malloc disk", 00:09:14.302 "block_size": 512, 00:09:14.302 "num_blocks": 65536, 00:09:14.302 "uuid": "81ac9c0c-3604-4386-bd18-66680e25de58", 00:09:14.302 "assigned_rate_limits": { 00:09:14.302 "rw_ios_per_sec": 0, 00:09:14.302 "rw_mbytes_per_sec": 0, 00:09:14.302 "r_mbytes_per_sec": 0, 00:09:14.302 "w_mbytes_per_sec": 0 00:09:14.302 }, 00:09:14.302 "claimed": true, 00:09:14.302 "claim_type": "exclusive_write", 00:09:14.302 "zoned": false, 00:09:14.302 "supported_io_types": { 00:09:14.302 "read": true, 00:09:14.302 "write": true, 00:09:14.302 "unmap": true, 00:09:14.302 "flush": true, 00:09:14.302 "reset": true, 00:09:14.302 "nvme_admin": false, 00:09:14.302 "nvme_io": false, 00:09:14.302 "nvme_io_md": false, 00:09:14.302 "write_zeroes": true, 00:09:14.302 "zcopy": true, 00:09:14.302 "get_zone_info": false, 00:09:14.302 "zone_management": false, 00:09:14.302 "zone_append": false, 00:09:14.302 "compare": false, 00:09:14.302 "compare_and_write": false, 00:09:14.302 "abort": true, 00:09:14.302 "seek_hole": false, 00:09:14.302 "seek_data": false, 00:09:14.302 "copy": true, 00:09:14.302 "nvme_iov_md": false 00:09:14.302 }, 00:09:14.302 "memory_domains": [ 00:09:14.302 { 00:09:14.302 "dma_device_id": "system", 00:09:14.302 "dma_device_type": 1 00:09:14.302 }, 00:09:14.302 { 00:09:14.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.302 "dma_device_type": 2 00:09:14.302 } 00:09:14.302 ], 00:09:14.302 "driver_specific": {} 00:09:14.302 } 00:09:14.302 ] 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:14.302 23:49:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.302 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.562 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.562 "name": "Existed_Raid", 00:09:14.562 "uuid": "93efb38a-77a2-4a4f-9834-a1d46ee584cb", 00:09:14.562 "strip_size_kb": 64, 00:09:14.562 
"state": "online", 00:09:14.562 "raid_level": "raid0", 00:09:14.562 "superblock": true, 00:09:14.562 "num_base_bdevs": 4, 00:09:14.562 "num_base_bdevs_discovered": 4, 00:09:14.562 "num_base_bdevs_operational": 4, 00:09:14.562 "base_bdevs_list": [ 00:09:14.562 { 00:09:14.562 "name": "NewBaseBdev", 00:09:14.562 "uuid": "81ac9c0c-3604-4386-bd18-66680e25de58", 00:09:14.562 "is_configured": true, 00:09:14.562 "data_offset": 2048, 00:09:14.562 "data_size": 63488 00:09:14.562 }, 00:09:14.562 { 00:09:14.562 "name": "BaseBdev2", 00:09:14.562 "uuid": "9bf2cc90-d48d-4d19-b36d-ba4920b13ae7", 00:09:14.562 "is_configured": true, 00:09:14.562 "data_offset": 2048, 00:09:14.562 "data_size": 63488 00:09:14.562 }, 00:09:14.562 { 00:09:14.562 "name": "BaseBdev3", 00:09:14.562 "uuid": "7cd359fb-fad4-44c1-bbf3-4a97018e03fb", 00:09:14.562 "is_configured": true, 00:09:14.562 "data_offset": 2048, 00:09:14.562 "data_size": 63488 00:09:14.562 }, 00:09:14.562 { 00:09:14.562 "name": "BaseBdev4", 00:09:14.562 "uuid": "7acf3a99-42de-43c2-aaa8-5b55eeb0cf41", 00:09:14.562 "is_configured": true, 00:09:14.562 "data_offset": 2048, 00:09:14.562 "data_size": 63488 00:09:14.562 } 00:09:14.562 ] 00:09:14.562 }' 00:09:14.562 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.562 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.822 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:14.822 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:14.822 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:14.822 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:14.822 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:14.822 
23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:14.822 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:14.822 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:14.822 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.822 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.822 [2024-11-02 23:49:08.851260] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.822 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.822 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:14.822 "name": "Existed_Raid", 00:09:14.822 "aliases": [ 00:09:14.822 "93efb38a-77a2-4a4f-9834-a1d46ee584cb" 00:09:14.822 ], 00:09:14.822 "product_name": "Raid Volume", 00:09:14.822 "block_size": 512, 00:09:14.822 "num_blocks": 253952, 00:09:14.822 "uuid": "93efb38a-77a2-4a4f-9834-a1d46ee584cb", 00:09:14.822 "assigned_rate_limits": { 00:09:14.822 "rw_ios_per_sec": 0, 00:09:14.822 "rw_mbytes_per_sec": 0, 00:09:14.822 "r_mbytes_per_sec": 0, 00:09:14.822 "w_mbytes_per_sec": 0 00:09:14.822 }, 00:09:14.822 "claimed": false, 00:09:14.822 "zoned": false, 00:09:14.822 "supported_io_types": { 00:09:14.822 "read": true, 00:09:14.822 "write": true, 00:09:14.822 "unmap": true, 00:09:14.822 "flush": true, 00:09:14.822 "reset": true, 00:09:14.822 "nvme_admin": false, 00:09:14.822 "nvme_io": false, 00:09:14.822 "nvme_io_md": false, 00:09:14.822 "write_zeroes": true, 00:09:14.822 "zcopy": false, 00:09:14.822 "get_zone_info": false, 00:09:14.822 "zone_management": false, 00:09:14.822 "zone_append": false, 00:09:14.822 "compare": false, 00:09:14.822 "compare_and_write": false, 00:09:14.822 "abort": 
false, 00:09:14.822 "seek_hole": false, 00:09:14.822 "seek_data": false, 00:09:14.822 "copy": false, 00:09:14.822 "nvme_iov_md": false 00:09:14.822 }, 00:09:14.822 "memory_domains": [ 00:09:14.822 { 00:09:14.822 "dma_device_id": "system", 00:09:14.822 "dma_device_type": 1 00:09:14.822 }, 00:09:14.822 { 00:09:14.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.822 "dma_device_type": 2 00:09:14.822 }, 00:09:14.822 { 00:09:14.822 "dma_device_id": "system", 00:09:14.822 "dma_device_type": 1 00:09:14.822 }, 00:09:14.822 { 00:09:14.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.822 "dma_device_type": 2 00:09:14.822 }, 00:09:14.822 { 00:09:14.822 "dma_device_id": "system", 00:09:14.822 "dma_device_type": 1 00:09:14.822 }, 00:09:14.822 { 00:09:14.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.822 "dma_device_type": 2 00:09:14.822 }, 00:09:14.822 { 00:09:14.822 "dma_device_id": "system", 00:09:14.822 "dma_device_type": 1 00:09:14.822 }, 00:09:14.822 { 00:09:14.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.822 "dma_device_type": 2 00:09:14.822 } 00:09:14.822 ], 00:09:14.822 "driver_specific": { 00:09:14.822 "raid": { 00:09:14.822 "uuid": "93efb38a-77a2-4a4f-9834-a1d46ee584cb", 00:09:14.822 "strip_size_kb": 64, 00:09:14.822 "state": "online", 00:09:14.822 "raid_level": "raid0", 00:09:14.822 "superblock": true, 00:09:14.822 "num_base_bdevs": 4, 00:09:14.822 "num_base_bdevs_discovered": 4, 00:09:14.822 "num_base_bdevs_operational": 4, 00:09:14.822 "base_bdevs_list": [ 00:09:14.822 { 00:09:14.822 "name": "NewBaseBdev", 00:09:14.822 "uuid": "81ac9c0c-3604-4386-bd18-66680e25de58", 00:09:14.822 "is_configured": true, 00:09:14.822 "data_offset": 2048, 00:09:14.822 "data_size": 63488 00:09:14.822 }, 00:09:14.822 { 00:09:14.822 "name": "BaseBdev2", 00:09:14.822 "uuid": "9bf2cc90-d48d-4d19-b36d-ba4920b13ae7", 00:09:14.822 "is_configured": true, 00:09:14.822 "data_offset": 2048, 00:09:14.822 "data_size": 63488 00:09:14.822 }, 00:09:14.822 { 00:09:14.822 
"name": "BaseBdev3", 00:09:14.822 "uuid": "7cd359fb-fad4-44c1-bbf3-4a97018e03fb", 00:09:14.822 "is_configured": true, 00:09:14.822 "data_offset": 2048, 00:09:14.822 "data_size": 63488 00:09:14.822 }, 00:09:14.822 { 00:09:14.822 "name": "BaseBdev4", 00:09:14.822 "uuid": "7acf3a99-42de-43c2-aaa8-5b55eeb0cf41", 00:09:14.822 "is_configured": true, 00:09:14.822 "data_offset": 2048, 00:09:14.822 "data_size": 63488 00:09:14.822 } 00:09:14.822 ] 00:09:14.822 } 00:09:14.822 } 00:09:14.822 }' 00:09:14.822 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:15.082 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:15.082 BaseBdev2 00:09:15.082 BaseBdev3 00:09:15.082 BaseBdev4' 00:09:15.082 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.082 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:15.082 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.082 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.082 23:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:15.082 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.082 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.082 23:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.082 23:49:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.082 [2024-11-02 23:49:09.130382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.082 [2024-11-02 23:49:09.130434] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.082 [2024-11-02 23:49:09.130550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.082 [2024-11-02 23:49:09.130637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.082 [2024-11-02 23:49:09.130651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80822 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 80822 ']' 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 80822 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80822 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80822' 00:09:15.082 killing process with pid 80822 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 80822 00:09:15.082 [2024-11-02 23:49:09.165398] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:15.082 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 80822 00:09:15.342 [2024-11-02 23:49:09.242698] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:15.602 23:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:15.602 00:09:15.602 real 0m9.698s 00:09:15.602 user 0m16.200s 00:09:15.602 sys 0m2.111s 00:09:15.602 23:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:15.602 23:49:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.602 ************************************ 00:09:15.602 END TEST raid_state_function_test_sb 00:09:15.602 ************************************ 00:09:15.602 23:49:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:09:15.602 23:49:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:15.602 23:49:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:15.602 23:49:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:15.602 ************************************ 00:09:15.602 START TEST raid_superblock_test 00:09:15.602 ************************************ 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81479 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81479 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 81479 ']' 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:15.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:15.602 23:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.862 [2024-11-02 23:49:09.725250] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:09:15.862 [2024-11-02 23:49:09.725357] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81479 ] 00:09:15.862 [2024-11-02 23:49:09.883522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.862 [2024-11-02 23:49:09.926437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.121 [2024-11-02 23:49:10.006202] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.121 [2024-11-02 23:49:10.006254] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:16.689 
23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.689 malloc1 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.689 [2024-11-02 23:49:10.589689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:16.689 [2024-11-02 23:49:10.589842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.689 [2024-11-02 23:49:10.589889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:09:16.689 [2024-11-02 23:49:10.589949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.689 [2024-11-02 23:49:10.592502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.689 [2024-11-02 23:49:10.592596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:16.689 pt1 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.689 malloc2 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.689 [2024-11-02 23:49:10.628619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:16.689 [2024-11-02 23:49:10.628687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.689 [2024-11-02 23:49:10.628707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:16.689 [2024-11-02 23:49:10.628721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.689 [2024-11-02 23:49:10.631256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.689 [2024-11-02 23:49:10.631361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:16.689 
pt2 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.689 malloc3 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.689 [2024-11-02 23:49:10.663535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:16.689 [2024-11-02 23:49:10.663657] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.689 [2024-11-02 23:49:10.663704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:16.689 [2024-11-02 23:49:10.663759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.689 [2024-11-02 23:49:10.666284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.689 [2024-11-02 23:49:10.666384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:16.689 pt3 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.689 malloc4 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.689 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.689 [2024-11-02 23:49:10.710836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:16.689 [2024-11-02 23:49:10.710957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.690 [2024-11-02 23:49:10.711005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:16.690 [2024-11-02 23:49:10.711049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.690 [2024-11-02 23:49:10.713611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.690 [2024-11-02 23:49:10.713693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:16.690 pt4 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.690 [2024-11-02 23:49:10.722837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:16.690 [2024-11-02 
23:49:10.725053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:16.690 [2024-11-02 23:49:10.725175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:16.690 [2024-11-02 23:49:10.725260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:16.690 [2024-11-02 23:49:10.725478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:16.690 [2024-11-02 23:49:10.725536] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:16.690 [2024-11-02 23:49:10.725858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:16.690 [2024-11-02 23:49:10.726082] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:16.690 [2024-11-02 23:49:10.726131] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:16.690 [2024-11-02 23:49:10.726342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.690 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.950 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.950 "name": "raid_bdev1", 00:09:16.950 "uuid": "f121c53a-1028-4b66-8549-71ca0cde2c3d", 00:09:16.950 "strip_size_kb": 64, 00:09:16.950 "state": "online", 00:09:16.950 "raid_level": "raid0", 00:09:16.950 "superblock": true, 00:09:16.950 "num_base_bdevs": 4, 00:09:16.950 "num_base_bdevs_discovered": 4, 00:09:16.950 "num_base_bdevs_operational": 4, 00:09:16.950 "base_bdevs_list": [ 00:09:16.950 { 00:09:16.950 "name": "pt1", 00:09:16.950 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.950 "is_configured": true, 00:09:16.950 "data_offset": 2048, 00:09:16.950 "data_size": 63488 00:09:16.950 }, 00:09:16.950 { 00:09:16.950 "name": "pt2", 00:09:16.950 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.950 "is_configured": true, 00:09:16.950 "data_offset": 2048, 00:09:16.950 "data_size": 63488 00:09:16.950 }, 00:09:16.950 { 00:09:16.950 "name": "pt3", 00:09:16.950 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.950 "is_configured": true, 00:09:16.950 "data_offset": 2048, 00:09:16.950 
"data_size": 63488 00:09:16.950 }, 00:09:16.950 { 00:09:16.950 "name": "pt4", 00:09:16.950 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:16.950 "is_configured": true, 00:09:16.950 "data_offset": 2048, 00:09:16.950 "data_size": 63488 00:09:16.950 } 00:09:16.950 ] 00:09:16.950 }' 00:09:16.950 23:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.950 23:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.210 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:17.210 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:17.210 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:17.210 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:17.210 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:17.210 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:17.210 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:17.210 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:17.210 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.210 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.210 [2024-11-02 23:49:11.182521] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.210 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.210 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:17.210 "name": "raid_bdev1", 00:09:17.210 "aliases": [ 00:09:17.210 "f121c53a-1028-4b66-8549-71ca0cde2c3d" 
00:09:17.210 ], 00:09:17.210 "product_name": "Raid Volume", 00:09:17.210 "block_size": 512, 00:09:17.210 "num_blocks": 253952, 00:09:17.210 "uuid": "f121c53a-1028-4b66-8549-71ca0cde2c3d", 00:09:17.210 "assigned_rate_limits": { 00:09:17.210 "rw_ios_per_sec": 0, 00:09:17.210 "rw_mbytes_per_sec": 0, 00:09:17.210 "r_mbytes_per_sec": 0, 00:09:17.210 "w_mbytes_per_sec": 0 00:09:17.210 }, 00:09:17.210 "claimed": false, 00:09:17.210 "zoned": false, 00:09:17.210 "supported_io_types": { 00:09:17.210 "read": true, 00:09:17.210 "write": true, 00:09:17.210 "unmap": true, 00:09:17.210 "flush": true, 00:09:17.210 "reset": true, 00:09:17.210 "nvme_admin": false, 00:09:17.210 "nvme_io": false, 00:09:17.210 "nvme_io_md": false, 00:09:17.210 "write_zeroes": true, 00:09:17.210 "zcopy": false, 00:09:17.210 "get_zone_info": false, 00:09:17.210 "zone_management": false, 00:09:17.210 "zone_append": false, 00:09:17.210 "compare": false, 00:09:17.210 "compare_and_write": false, 00:09:17.210 "abort": false, 00:09:17.210 "seek_hole": false, 00:09:17.210 "seek_data": false, 00:09:17.210 "copy": false, 00:09:17.210 "nvme_iov_md": false 00:09:17.210 }, 00:09:17.210 "memory_domains": [ 00:09:17.210 { 00:09:17.210 "dma_device_id": "system", 00:09:17.210 "dma_device_type": 1 00:09:17.210 }, 00:09:17.210 { 00:09:17.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.210 "dma_device_type": 2 00:09:17.210 }, 00:09:17.210 { 00:09:17.210 "dma_device_id": "system", 00:09:17.210 "dma_device_type": 1 00:09:17.210 }, 00:09:17.210 { 00:09:17.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.210 "dma_device_type": 2 00:09:17.210 }, 00:09:17.210 { 00:09:17.210 "dma_device_id": "system", 00:09:17.210 "dma_device_type": 1 00:09:17.210 }, 00:09:17.210 { 00:09:17.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.210 "dma_device_type": 2 00:09:17.210 }, 00:09:17.210 { 00:09:17.210 "dma_device_id": "system", 00:09:17.210 "dma_device_type": 1 00:09:17.210 }, 00:09:17.210 { 00:09:17.210 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:17.210 "dma_device_type": 2 00:09:17.210 } 00:09:17.210 ], 00:09:17.210 "driver_specific": { 00:09:17.210 "raid": { 00:09:17.210 "uuid": "f121c53a-1028-4b66-8549-71ca0cde2c3d", 00:09:17.210 "strip_size_kb": 64, 00:09:17.210 "state": "online", 00:09:17.210 "raid_level": "raid0", 00:09:17.210 "superblock": true, 00:09:17.210 "num_base_bdevs": 4, 00:09:17.210 "num_base_bdevs_discovered": 4, 00:09:17.210 "num_base_bdevs_operational": 4, 00:09:17.210 "base_bdevs_list": [ 00:09:17.210 { 00:09:17.210 "name": "pt1", 00:09:17.210 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:17.210 "is_configured": true, 00:09:17.210 "data_offset": 2048, 00:09:17.210 "data_size": 63488 00:09:17.210 }, 00:09:17.210 { 00:09:17.210 "name": "pt2", 00:09:17.210 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.210 "is_configured": true, 00:09:17.210 "data_offset": 2048, 00:09:17.210 "data_size": 63488 00:09:17.210 }, 00:09:17.210 { 00:09:17.210 "name": "pt3", 00:09:17.210 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:17.210 "is_configured": true, 00:09:17.210 "data_offset": 2048, 00:09:17.210 "data_size": 63488 00:09:17.210 }, 00:09:17.210 { 00:09:17.210 "name": "pt4", 00:09:17.210 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:17.210 "is_configured": true, 00:09:17.210 "data_offset": 2048, 00:09:17.210 "data_size": 63488 00:09:17.210 } 00:09:17.210 ] 00:09:17.210 } 00:09:17.210 } 00:09:17.210 }' 00:09:17.210 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:17.210 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:17.210 pt2 00:09:17.210 pt3 00:09:17.210 pt4' 00:09:17.210 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.470 23:49:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:17.470 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.470 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.470 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:17.470 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.471 23:49:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.471 [2024-11-02 23:49:11.505837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f121c53a-1028-4b66-8549-71ca0cde2c3d 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f121c53a-1028-4b66-8549-71ca0cde2c3d ']' 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.471 [2024-11-02 23:49:11.537455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:17.471 [2024-11-02 23:49:11.537497] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.471 [2024-11-02 23:49:11.537593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.471 [2024-11-02 23:49:11.537690] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.471 [2024-11-02 23:49:11.537704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.471 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.747 23:49:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.747 [2024-11-02 23:49:11.689236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:17.747 [2024-11-02 23:49:11.691562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:17.747 [2024-11-02 23:49:11.691636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:17.747 [2024-11-02 23:49:11.691670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:17.747 [2024-11-02 23:49:11.691729] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:17.747 [2024-11-02 23:49:11.691833] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:17.747 [2024-11-02 23:49:11.691861] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:17.747 [2024-11-02 23:49:11.691881] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:17.747 [2024-11-02 23:49:11.691898] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:17.747 [2024-11-02 23:49:11.691916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001580 name raid_bdev1, state configuring 00:09:17.747 request: 00:09:17.747 { 00:09:17.747 "name": "raid_bdev1", 00:09:17.747 "raid_level": "raid0", 00:09:17.747 "base_bdevs": [ 00:09:17.747 "malloc1", 00:09:17.747 "malloc2", 00:09:17.747 "malloc3", 00:09:17.747 "malloc4" 00:09:17.747 ], 00:09:17.747 "strip_size_kb": 64, 00:09:17.747 "superblock": false, 00:09:17.747 "method": "bdev_raid_create", 00:09:17.747 "req_id": 1 00:09:17.747 } 00:09:17.747 Got JSON-RPC error response 00:09:17.747 response: 00:09:17.747 { 00:09:17.747 "code": -17, 00:09:17.747 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:17.747 } 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.747 [2024-11-02 23:49:11.757089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:17.747 [2024-11-02 23:49:11.757149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.747 [2024-11-02 23:49:11.757176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:17.747 [2024-11-02 23:49:11.757187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.747 [2024-11-02 23:49:11.759744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.747 [2024-11-02 23:49:11.759795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:17.747 [2024-11-02 23:49:11.759881] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:17.747 [2024-11-02 23:49:11.759924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:17.747 pt1 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.747 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.747 "name": "raid_bdev1", 00:09:17.748 "uuid": "f121c53a-1028-4b66-8549-71ca0cde2c3d", 00:09:17.748 "strip_size_kb": 64, 00:09:17.748 "state": "configuring", 00:09:17.748 "raid_level": "raid0", 00:09:17.748 "superblock": true, 00:09:17.748 "num_base_bdevs": 4, 00:09:17.748 "num_base_bdevs_discovered": 1, 00:09:17.748 "num_base_bdevs_operational": 4, 00:09:17.748 "base_bdevs_list": [ 00:09:17.748 { 00:09:17.748 "name": "pt1", 00:09:17.748 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:17.748 "is_configured": true, 00:09:17.748 "data_offset": 2048, 00:09:17.748 "data_size": 63488 00:09:17.748 }, 00:09:17.748 { 00:09:17.748 "name": null, 00:09:17.748 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.748 "is_configured": false, 00:09:17.748 "data_offset": 2048, 00:09:17.748 "data_size": 63488 00:09:17.748 }, 00:09:17.748 { 00:09:17.748 "name": null, 00:09:17.748 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:17.748 "is_configured": false, 00:09:17.748 "data_offset": 2048, 00:09:17.748 "data_size": 63488 00:09:17.748 }, 00:09:17.748 { 00:09:17.748 "name": null, 00:09:17.748 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:17.748 "is_configured": false, 00:09:17.748 "data_offset": 2048, 00:09:17.748 "data_size": 63488 00:09:17.748 } 00:09:17.748 ] 00:09:17.748 }' 00:09:17.748 23:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.748 23:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.317 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:18.317 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:18.317 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.317 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.317 [2024-11-02 23:49:12.220379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:18.317 [2024-11-02 23:49:12.220485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.317 [2024-11-02 23:49:12.220516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:18.317 [2024-11-02 23:49:12.220529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.317 [2024-11-02 23:49:12.221102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.317 [2024-11-02 23:49:12.221131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:18.317 [2024-11-02 23:49:12.221247] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:18.317 [2024-11-02 23:49:12.221285] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:18.317 pt2 00:09:18.317 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.317 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.318 [2024-11-02 23:49:12.228380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.318 23:49:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.318 "name": "raid_bdev1", 00:09:18.318 "uuid": "f121c53a-1028-4b66-8549-71ca0cde2c3d", 00:09:18.318 "strip_size_kb": 64, 00:09:18.318 "state": "configuring", 00:09:18.318 "raid_level": "raid0", 00:09:18.318 "superblock": true, 00:09:18.318 "num_base_bdevs": 4, 00:09:18.318 "num_base_bdevs_discovered": 1, 00:09:18.318 "num_base_bdevs_operational": 4, 00:09:18.318 "base_bdevs_list": [ 00:09:18.318 { 00:09:18.318 "name": "pt1", 00:09:18.318 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.318 "is_configured": true, 00:09:18.318 "data_offset": 2048, 00:09:18.318 "data_size": 63488 00:09:18.318 }, 00:09:18.318 { 00:09:18.318 "name": null, 00:09:18.318 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.318 "is_configured": false, 00:09:18.318 "data_offset": 0, 00:09:18.318 "data_size": 63488 00:09:18.318 }, 00:09:18.318 { 00:09:18.318 "name": null, 00:09:18.318 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.318 "is_configured": false, 00:09:18.318 "data_offset": 2048, 00:09:18.318 "data_size": 63488 00:09:18.318 }, 00:09:18.318 { 00:09:18.318 "name": null, 00:09:18.318 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:18.318 "is_configured": false, 00:09:18.318 "data_offset": 2048, 00:09:18.318 "data_size": 63488 00:09:18.318 } 00:09:18.318 ] 00:09:18.318 }' 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.318 23:49:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:18.577 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:18.577 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:18.577 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:18.577 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.577 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.577 [2024-11-02 23:49:12.663603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:18.577 [2024-11-02 23:49:12.663741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.577 [2024-11-02 23:49:12.663788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:18.577 [2024-11-02 23:49:12.663805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.577 [2024-11-02 23:49:12.664313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.577 [2024-11-02 23:49:12.664346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:18.577 [2024-11-02 23:49:12.664446] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:18.577 [2024-11-02 23:49:12.664490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:18.577 pt2 00:09:18.577 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.577 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:18.577 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:18.578 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:18.578 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.578 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.838 [2024-11-02 23:49:12.675497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:18.838 [2024-11-02 23:49:12.675568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.838 [2024-11-02 23:49:12.675591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:18.838 [2024-11-02 23:49:12.675605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.838 [2024-11-02 23:49:12.676066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.838 [2024-11-02 23:49:12.676097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:18.838 [2024-11-02 23:49:12.676168] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:18.838 [2024-11-02 23:49:12.676193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:18.838 pt3 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.838 [2024-11-02 23:49:12.687474] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:18.838 [2024-11-02 23:49:12.687542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.838 [2024-11-02 23:49:12.687560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:18.838 [2024-11-02 23:49:12.687572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.838 [2024-11-02 23:49:12.687931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.838 [2024-11-02 23:49:12.687962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:18.838 [2024-11-02 23:49:12.688026] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:18.838 [2024-11-02 23:49:12.688048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:18.838 [2024-11-02 23:49:12.688159] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:18.838 [2024-11-02 23:49:12.688181] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:18.838 [2024-11-02 23:49:12.688442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:18.838 [2024-11-02 23:49:12.688583] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:18.838 [2024-11-02 23:49:12.688599] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:18.838 [2024-11-02 23:49:12.688713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.838 pt4 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.838 "name": "raid_bdev1", 00:09:18.838 "uuid": "f121c53a-1028-4b66-8549-71ca0cde2c3d", 00:09:18.838 "strip_size_kb": 64, 00:09:18.838 "state": "online", 00:09:18.838 "raid_level": "raid0", 00:09:18.838 
"superblock": true, 00:09:18.838 "num_base_bdevs": 4, 00:09:18.838 "num_base_bdevs_discovered": 4, 00:09:18.838 "num_base_bdevs_operational": 4, 00:09:18.838 "base_bdevs_list": [ 00:09:18.838 { 00:09:18.838 "name": "pt1", 00:09:18.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.838 "is_configured": true, 00:09:18.838 "data_offset": 2048, 00:09:18.838 "data_size": 63488 00:09:18.838 }, 00:09:18.838 { 00:09:18.838 "name": "pt2", 00:09:18.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.838 "is_configured": true, 00:09:18.838 "data_offset": 2048, 00:09:18.838 "data_size": 63488 00:09:18.838 }, 00:09:18.838 { 00:09:18.838 "name": "pt3", 00:09:18.838 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.838 "is_configured": true, 00:09:18.838 "data_offset": 2048, 00:09:18.838 "data_size": 63488 00:09:18.838 }, 00:09:18.838 { 00:09:18.838 "name": "pt4", 00:09:18.838 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:18.838 "is_configured": true, 00:09:18.838 "data_offset": 2048, 00:09:18.838 "data_size": 63488 00:09:18.838 } 00:09:18.838 ] 00:09:18.838 }' 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.838 23:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.098 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:19.098 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:19.098 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:19.098 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:19.098 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:19.098 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:19.098 23:49:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:19.098 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:19.099 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.099 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.099 [2024-11-02 23:49:13.167162] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.099 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.358 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:19.358 "name": "raid_bdev1", 00:09:19.358 "aliases": [ 00:09:19.358 "f121c53a-1028-4b66-8549-71ca0cde2c3d" 00:09:19.358 ], 00:09:19.358 "product_name": "Raid Volume", 00:09:19.358 "block_size": 512, 00:09:19.358 "num_blocks": 253952, 00:09:19.358 "uuid": "f121c53a-1028-4b66-8549-71ca0cde2c3d", 00:09:19.358 "assigned_rate_limits": { 00:09:19.358 "rw_ios_per_sec": 0, 00:09:19.358 "rw_mbytes_per_sec": 0, 00:09:19.358 "r_mbytes_per_sec": 0, 00:09:19.358 "w_mbytes_per_sec": 0 00:09:19.358 }, 00:09:19.358 "claimed": false, 00:09:19.358 "zoned": false, 00:09:19.358 "supported_io_types": { 00:09:19.358 "read": true, 00:09:19.358 "write": true, 00:09:19.358 "unmap": true, 00:09:19.358 "flush": true, 00:09:19.358 "reset": true, 00:09:19.358 "nvme_admin": false, 00:09:19.358 "nvme_io": false, 00:09:19.358 "nvme_io_md": false, 00:09:19.358 "write_zeroes": true, 00:09:19.358 "zcopy": false, 00:09:19.358 "get_zone_info": false, 00:09:19.358 "zone_management": false, 00:09:19.358 "zone_append": false, 00:09:19.358 "compare": false, 00:09:19.358 "compare_and_write": false, 00:09:19.359 "abort": false, 00:09:19.359 "seek_hole": false, 00:09:19.359 "seek_data": false, 00:09:19.359 "copy": false, 00:09:19.359 "nvme_iov_md": false 00:09:19.359 }, 00:09:19.359 
"memory_domains": [ 00:09:19.359 { 00:09:19.359 "dma_device_id": "system", 00:09:19.359 "dma_device_type": 1 00:09:19.359 }, 00:09:19.359 { 00:09:19.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.359 "dma_device_type": 2 00:09:19.359 }, 00:09:19.359 { 00:09:19.359 "dma_device_id": "system", 00:09:19.359 "dma_device_type": 1 00:09:19.359 }, 00:09:19.359 { 00:09:19.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.359 "dma_device_type": 2 00:09:19.359 }, 00:09:19.359 { 00:09:19.359 "dma_device_id": "system", 00:09:19.359 "dma_device_type": 1 00:09:19.359 }, 00:09:19.359 { 00:09:19.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.359 "dma_device_type": 2 00:09:19.359 }, 00:09:19.359 { 00:09:19.359 "dma_device_id": "system", 00:09:19.359 "dma_device_type": 1 00:09:19.359 }, 00:09:19.359 { 00:09:19.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.359 "dma_device_type": 2 00:09:19.359 } 00:09:19.359 ], 00:09:19.359 "driver_specific": { 00:09:19.359 "raid": { 00:09:19.359 "uuid": "f121c53a-1028-4b66-8549-71ca0cde2c3d", 00:09:19.359 "strip_size_kb": 64, 00:09:19.359 "state": "online", 00:09:19.359 "raid_level": "raid0", 00:09:19.359 "superblock": true, 00:09:19.359 "num_base_bdevs": 4, 00:09:19.359 "num_base_bdevs_discovered": 4, 00:09:19.359 "num_base_bdevs_operational": 4, 00:09:19.359 "base_bdevs_list": [ 00:09:19.359 { 00:09:19.359 "name": "pt1", 00:09:19.359 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.359 "is_configured": true, 00:09:19.359 "data_offset": 2048, 00:09:19.359 "data_size": 63488 00:09:19.359 }, 00:09:19.359 { 00:09:19.359 "name": "pt2", 00:09:19.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.359 "is_configured": true, 00:09:19.359 "data_offset": 2048, 00:09:19.359 "data_size": 63488 00:09:19.359 }, 00:09:19.359 { 00:09:19.359 "name": "pt3", 00:09:19.359 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.359 "is_configured": true, 00:09:19.359 "data_offset": 2048, 00:09:19.359 "data_size": 63488 
00:09:19.359 }, 00:09:19.359 { 00:09:19.359 "name": "pt4", 00:09:19.359 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:19.359 "is_configured": true, 00:09:19.359 "data_offset": 2048, 00:09:19.359 "data_size": 63488 00:09:19.359 } 00:09:19.359 ] 00:09:19.359 } 00:09:19.359 } 00:09:19.359 }' 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:19.359 pt2 00:09:19.359 pt3 00:09:19.359 pt4' 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.359 
23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.359 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.619 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.619 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.619 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:19.619 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.619 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:19.619 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.619 [2024-11-02 23:49:13.478509] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.619 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.619 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f121c53a-1028-4b66-8549-71ca0cde2c3d '!=' f121c53a-1028-4b66-8549-71ca0cde2c3d ']' 00:09:19.619 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:19.620 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:19.620 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:19.620 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81479 00:09:19.620 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 81479 ']' 00:09:19.620 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 81479 00:09:19.620 23:49:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:09:19.620 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:19.620 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81479 00:09:19.620 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:19.620 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:19.620 killing process with pid 81479 00:09:19.620 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81479' 00:09:19.620 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 81479 00:09:19.620 [2024-11-02 23:49:13.560484] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:19.620 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 81479 00:09:19.620 [2024-11-02 23:49:13.560643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.620 [2024-11-02 23:49:13.560759] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.620 [2024-11-02 23:49:13.560775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:19.620 [2024-11-02 23:49:13.642399] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:20.188 23:49:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:20.188 00:09:20.188 real 0m4.329s 00:09:20.188 user 0m6.651s 00:09:20.188 sys 0m1.020s 00:09:20.188 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:20.188 23:49:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.188 ************************************ 00:09:20.188 END TEST raid_superblock_test 
00:09:20.188 ************************************ 00:09:20.188 23:49:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:09:20.188 23:49:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:20.188 23:49:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:20.188 23:49:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:20.188 ************************************ 00:09:20.188 START TEST raid_read_error_test 00:09:20.188 ************************************ 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dpE3vmN9tW 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81728 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81728 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 81728 ']' 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:20.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:20.188 23:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.188 [2024-11-02 23:49:14.160285] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:09:20.188 [2024-11-02 23:49:14.160440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81728 ] 00:09:20.448 [2024-11-02 23:49:14.319908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.448 [2024-11-02 23:49:14.372180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.448 [2024-11-02 23:49:14.455249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.448 [2024-11-02 23:49:14.455299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.039 23:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:21.039 23:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:21.039 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.039 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:21.039 23:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.039 23:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.039 BaseBdev1_malloc 00:09:21.039 23:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.039 23:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:21.039 23:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.039 23:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.039 true 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.039 [2024-11-02 23:49:15.012417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:21.039 [2024-11-02 23:49:15.012494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.039 [2024-11-02 23:49:15.012524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:21.039 [2024-11-02 23:49:15.012544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.039 [2024-11-02 23:49:15.015216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.039 [2024-11-02 23:49:15.015267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:21.039 BaseBdev1 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.039 BaseBdev2_malloc 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.039 true 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.039 [2024-11-02 23:49:15.059600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:21.039 [2024-11-02 23:49:15.059669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.039 [2024-11-02 23:49:15.059694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:21.039 [2024-11-02 23:49:15.059717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.039 [2024-11-02 23:49:15.062136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.039 [2024-11-02 23:49:15.062183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:21.039 BaseBdev2 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.039 BaseBdev3_malloc 00:09:21.039 23:49:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.039 true 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.039 [2024-11-02 23:49:15.106555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:21.039 [2024-11-02 23:49:15.106616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.039 [2024-11-02 23:49:15.106639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:21.039 [2024-11-02 23:49:15.106651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.039 [2024-11-02 23:49:15.109063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.039 [2024-11-02 23:49:15.109104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:21.039 BaseBdev3 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc
00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.039 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.298 BaseBdev4_malloc
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.298 true
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.298 [2024-11-02 23:49:15.161477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:09:21.298 [2024-11-02 23:49:15.161543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:21.298 [2024-11-02 23:49:15.161574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:09:21.298 [2024-11-02 23:49:15.161585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:21.298 [2024-11-02 23:49:15.163981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:21.298 [2024-11-02 23:49:15.164022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:09:21.298 BaseBdev4
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.298 [2024-11-02 23:49:15.173518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:21.298 [2024-11-02 23:49:15.175733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:21.298 [2024-11-02 23:49:15.175837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:21.298 [2024-11-02 23:49:15.175911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:09:21.298 [2024-11-02 23:49:15.176133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000
00:09:21.298 [2024-11-02 23:49:15.176154] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:09:21.298 [2024-11-02 23:49:15.176433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530
00:09:21.298 [2024-11-02 23:49:15.176605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000
00:09:21.298 [2024-11-02 23:49:15.176628] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000
00:09:21.298 [2024-11-02 23:49:15.176782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:21.298 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:21.299 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.299 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.299 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.299 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:21.299 "name": "raid_bdev1",
00:09:21.299 "uuid": "db647676-a285-44e1-80af-f6988b956fef",
00:09:21.299 "strip_size_kb": 64,
00:09:21.299 "state": "online",
00:09:21.299 "raid_level": "raid0",
00:09:21.299 "superblock": true,
00:09:21.299 "num_base_bdevs": 4,
00:09:21.299 "num_base_bdevs_discovered": 4,
00:09:21.299 "num_base_bdevs_operational": 4,
00:09:21.299 "base_bdevs_list": [
00:09:21.299 {
00:09:21.299 "name": "BaseBdev1",
00:09:21.299 "uuid": "24478fbc-c580-56a5-88a4-7a17d752cc26",
00:09:21.299 "is_configured": true,
00:09:21.299 "data_offset": 2048,
00:09:21.299 "data_size": 63488
00:09:21.299 },
00:09:21.299 {
00:09:21.299 "name": "BaseBdev2",
00:09:21.299 "uuid": "a954fb8c-69f2-57a9-bd3d-d2fccbe1204d",
00:09:21.299 "is_configured": true,
00:09:21.299 "data_offset": 2048,
00:09:21.299 "data_size": 63488
00:09:21.299 },
00:09:21.299 {
00:09:21.299 "name": "BaseBdev3",
00:09:21.299 "uuid": "4686dc94-34d6-582d-b3d1-f5f58cac001a",
00:09:21.299 "is_configured": true,
00:09:21.299 "data_offset": 2048,
00:09:21.299 "data_size": 63488
00:09:21.299 },
00:09:21.299 {
00:09:21.299 "name": "BaseBdev4",
00:09:21.299 "uuid": "70c46012-b156-5b49-aa38-ed6b9c0e7307",
00:09:21.299 "is_configured": true,
00:09:21.299 "data_offset": 2048,
00:09:21.299 "data_size": 63488
00:09:21.299 }
00:09:21.299 ]
00:09:21.299 }'
00:09:21.299 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:21.299 23:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.556 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:21.556 23:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:21.814 [2024-11-02 23:49:15.709222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:22.752 "name": "raid_bdev1",
00:09:22.752 "uuid": "db647676-a285-44e1-80af-f6988b956fef",
00:09:22.752 "strip_size_kb": 64,
00:09:22.752 "state": "online",
00:09:22.752 "raid_level": "raid0",
00:09:22.752 "superblock": true,
00:09:22.752 "num_base_bdevs": 4,
00:09:22.752 "num_base_bdevs_discovered": 4,
00:09:22.752 "num_base_bdevs_operational": 4,
00:09:22.752 "base_bdevs_list": [
00:09:22.752 {
00:09:22.752 "name": "BaseBdev1",
00:09:22.752 "uuid": "24478fbc-c580-56a5-88a4-7a17d752cc26",
00:09:22.752 "is_configured": true,
00:09:22.752 "data_offset": 2048,
00:09:22.752 "data_size": 63488
00:09:22.752 },
00:09:22.752 {
00:09:22.752 "name": "BaseBdev2",
00:09:22.752 "uuid": "a954fb8c-69f2-57a9-bd3d-d2fccbe1204d",
00:09:22.752 "is_configured": true,
00:09:22.752 "data_offset": 2048,
00:09:22.752 "data_size": 63488
00:09:22.752 },
00:09:22.752 {
00:09:22.752 "name": "BaseBdev3",
00:09:22.752 "uuid": "4686dc94-34d6-582d-b3d1-f5f58cac001a",
00:09:22.752 "is_configured": true,
00:09:22.752 "data_offset": 2048,
00:09:22.752 "data_size": 63488
00:09:22.752 },
00:09:22.752 {
00:09:22.752 "name": "BaseBdev4",
00:09:22.752 "uuid": "70c46012-b156-5b49-aa38-ed6b9c0e7307",
00:09:22.752 "is_configured": true,
00:09:22.752 "data_offset": 2048,
00:09:22.752 "data_size": 63488
00:09:22.752 }
00:09:22.752 ]
00:09:22.752 }'
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:22.752 23:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:23.011 23:49:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:23.011 23:49:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:23.011 23:49:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:23.011 [2024-11-02 23:49:17.089490] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:23.011 [2024-11-02 23:49:17.089530] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:23.011 [2024-11-02 23:49:17.092245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:23.011 [2024-11-02 23:49:17.092302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:23.011 [2024-11-02 23:49:17.092347] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:23.011 [2024-11-02 23:49:17.092356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline
00:09:23.011 {
00:09:23.011 "results": [
00:09:23.011 {
00:09:23.011 "job": "raid_bdev1",
00:09:23.011 "core_mask": "0x1",
00:09:23.011 "workload": "randrw",
00:09:23.011 "percentage": 50,
00:09:23.011 "status": "finished",
00:09:23.011 "queue_depth": 1,
00:09:23.011 "io_size": 131072,
00:09:23.011 "runtime": 1.380528,
00:09:23.011 "iops": 15036.275975568768,
00:09:23.011 "mibps": 1879.534496946096,
00:09:23.011 "io_failed": 1,
00:09:23.011 "io_timeout": 0,
00:09:23.011 "avg_latency_us": 92.72886549339046,
00:09:23.011 "min_latency_us": 25.7117903930131,
00:09:23.011 "max_latency_us": 1538.235807860262
00:09:23.011 }
00:09:23.011 ],
00:09:23.011 "core_count": 1
00:09:23.011 }
00:09:23.011 23:49:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:23.011 23:49:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81728
00:09:23.011 23:49:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 81728 ']'
00:09:23.011 23:49:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 81728
00:09:23.011 23:49:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname
00:09:23.011 23:49:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:09:23.270 23:49:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81728
00:09:23.270 23:49:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:09:23.270 23:49:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
killing process with pid 81728
00:09:23.270 23:49:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81728'
00:09:23.270 23:49:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 81728
00:09:23.270 [2024-11-02 23:49:17.126496] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:23.270 23:49:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 81728
00:09:23.270 [2024-11-02 23:49:17.164235] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:23.529 23:49:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dpE3vmN9tW
00:09:23.529 23:49:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:23.529 23:49:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:23.529 23:49:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72
00:09:23.529 23:49:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:09:23.529 23:49:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:23.529 23:49:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:23.529 23:49:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]]
00:09:23.529
00:09:23.529 real 0m3.333s
00:09:23.529 user 0m4.098s
00:09:23.529 sys 0m0.645s
00:09:23.529 23:49:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:23.529 23:49:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:23.529 ************************************
00:09:23.529 END TEST raid_read_error_test
00:09:23.529 ************************************
00:09:23.529 23:49:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write
00:09:23.529 23:49:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:09:23.530 23:49:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:23.530 23:49:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:23.530 ************************************
00:09:23.530 START TEST raid_write_error_test
00:09:23.530 ************************************
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iir2O6LH5y
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81857
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81857
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 81857 ']'
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:09:23.530 23:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:23.530 [2024-11-02 23:49:17.565354] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization...
00:09:23.530 [2024-11-02 23:49:17.566008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81857 ]
00:09:23.789 [2024-11-02 23:49:17.703391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:23.789 [2024-11-02 23:49:17.738830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:23.789 [2024-11-02 23:49:17.793247] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:23.789 [2024-11-02 23:49:17.793294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.358 BaseBdev1_malloc
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.358 true
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.358 [2024-11-02 23:49:18.413951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:09:24.358 [2024-11-02 23:49:18.414010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:24.358 [2024-11-02 23:49:18.414031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:09:24.358 [2024-11-02 23:49:18.414047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:24.358 [2024-11-02 23:49:18.416208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:24.358 [2024-11-02 23:49:18.416247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:09:24.358 BaseBdev1
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.358 BaseBdev2_malloc
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.358 true
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.358 [2024-11-02 23:49:18.442412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:09:24.358 [2024-11-02 23:49:18.442464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:24.358 [2024-11-02 23:49:18.442483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:09:24.358 [2024-11-02 23:49:18.442501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:24.358 [2024-11-02 23:49:18.444553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:24.358 [2024-11-02 23:49:18.444587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:09:24.358 BaseBdev2
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.358 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.618 BaseBdev3_malloc
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.618 true
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.618 [2024-11-02 23:49:18.482695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:09:24.618 [2024-11-02 23:49:18.482753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:24.618 [2024-11-02 23:49:18.482772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:09:24.618 [2024-11-02 23:49:18.482782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:24.618 [2024-11-02 23:49:18.484806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:24.618 [2024-11-02 23:49:18.484836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:09:24.618 BaseBdev3
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.618 BaseBdev4_malloc
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.618 true
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.618 [2024-11-02 23:49:18.530340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:09:24.618 [2024-11-02 23:49:18.530393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:24.618 [2024-11-02 23:49:18.530417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:09:24.618 [2024-11-02 23:49:18.530425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:24.618 [2024-11-02 23:49:18.532452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:24.618 [2024-11-02 23:49:18.532487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:09:24.618 BaseBdev4
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.618 [2024-11-02 23:49:18.542372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:24.618 [2024-11-02 23:49:18.544202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:24.618 [2024-11-02 23:49:18.544277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:24.618 [2024-11-02 23:49:18.544338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:09:24.618 [2024-11-02 23:49:18.544525] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000
00:09:24.618 [2024-11-02 23:49:18.544543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:09:24.618 [2024-11-02 23:49:18.544808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530
00:09:24.618 [2024-11-02 23:49:18.544964] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000
00:09:24.618 [2024-11-02 23:49:18.544983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000
00:09:24.618 [2024-11-02 23:49:18.545103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:24.618 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:24.619 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:24.619 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:24.619 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:24.619 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:24.619 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:24.619 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:24.619 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:24.619 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.619 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.619 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.619 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:24.619 "name": "raid_bdev1",
00:09:24.619 "uuid": "50b4b22b-8968-4ea7-b4b6-bbd8f02a529b",
00:09:24.619 "strip_size_kb": 64,
00:09:24.619 "state": "online",
00:09:24.619 "raid_level": "raid0",
00:09:24.619 "superblock": true,
00:09:24.619 "num_base_bdevs": 4,
00:09:24.619 "num_base_bdevs_discovered": 4,
00:09:24.619 "num_base_bdevs_operational": 4,
00:09:24.619 "base_bdevs_list": [
00:09:24.619 {
00:09:24.619 "name": "BaseBdev1",
00:09:24.619 "uuid": "98474a4d-2098-5bdd-bde1-8ee737909927",
00:09:24.619 "is_configured": true,
00:09:24.619 "data_offset": 2048,
00:09:24.619 "data_size": 63488
00:09:24.619 },
00:09:24.619 {
00:09:24.619 "name": "BaseBdev2",
00:09:24.619 "uuid": "9e36e7fd-0136-5221-bbc7-fb84d5d8a5f8",
00:09:24.619 "is_configured": true,
00:09:24.619 "data_offset": 2048,
00:09:24.619 "data_size": 63488
00:09:24.619 },
00:09:24.619 {
00:09:24.619 "name": "BaseBdev3",
00:09:24.619 "uuid": "2d2a1bb5-319a-5bc7-8959-8671ec3e57a8",
00:09:24.619 "is_configured": true,
00:09:24.619 "data_offset": 2048,
00:09:24.619 "data_size": 63488
00:09:24.619 },
00:09:24.619 {
00:09:24.619 "name": "BaseBdev4",
00:09:24.619 "uuid": "49d2b0bb-1297-501e-928c-563770052380",
00:09:24.619 "is_configured": true,
00:09:24.619 "data_offset": 2048,
00:09:24.619 "data_size": 63488
00:09:24.619 }
00:09:24.619 ]
00:09:24.619 }'
00:09:24.619 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:24.619 23:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.188 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:25.188 23:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
[2024-11-02 23:49:19.089827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0
00:09:26.126 23:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:26.126 "name": "raid_bdev1",
00:09:26.126 "uuid": "50b4b22b-8968-4ea7-b4b6-bbd8f02a529b",
00:09:26.126 "strip_size_kb": 64,
00:09:26.126 "state": "online",
00:09:26.126 "raid_level": "raid0",
00:09:26.126 "superblock": true,
00:09:26.126 "num_base_bdevs": 4,
00:09:26.126 "num_base_bdevs_discovered": 4,
00:09:26.126 "num_base_bdevs_operational": 4,
00:09:26.126 "base_bdevs_list": [
00:09:26.126 {
00:09:26.126 "name": "BaseBdev1",
00:09:26.126 "uuid": "98474a4d-2098-5bdd-bde1-8ee737909927",
00:09:26.126 "is_configured": true,
00:09:26.126 "data_offset": 2048,
00:09:26.126 "data_size": 63488
00:09:26.126 },
00:09:26.126 {
00:09:26.126 "name": "BaseBdev2",
00:09:26.126 "uuid": "9e36e7fd-0136-5221-bbc7-fb84d5d8a5f8",
00:09:26.126 "is_configured": true,
00:09:26.126 "data_offset": 2048,
00:09:26.126 "data_size": 63488
00:09:26.126 },
00:09:26.126 {
00:09:26.126 "name": "BaseBdev3",
00:09:26.126 "uuid": "2d2a1bb5-319a-5bc7-8959-8671ec3e57a8",
00:09:26.126 "is_configured": true,
00:09:26.126 "data_offset": 2048,
00:09:26.126 "data_size": 63488
00:09:26.126 },
00:09:26.126 {
00:09:26.126 "name": "BaseBdev4",
00:09:26.126 "uuid": "49d2b0bb-1297-501e-928c-563770052380",
00:09:26.126 "is_configured": true,
00:09:26.126 "data_offset": 2048,
00:09:26.126 "data_size": 63488
00:09:26.126 }
00:09:26.126 ]
00:09:26.126 }'
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:26.126 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- #
set +x 00:09:26.696 [2024-11-02 23:49:20.493826] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:26.696 [2024-11-02 23:49:20.493862] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.696 [2024-11-02 23:49:20.496431] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.696 [2024-11-02 23:49:20.496485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.696 [2024-11-02 23:49:20.496530] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.696 [2024-11-02 23:49:20.496538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:09:26.696 { 00:09:26.696 "results": [ 00:09:26.696 { 00:09:26.696 "job": "raid_bdev1", 00:09:26.696 "core_mask": "0x1", 00:09:26.696 "workload": "randrw", 00:09:26.696 "percentage": 50, 00:09:26.696 "status": "finished", 00:09:26.696 "queue_depth": 1, 00:09:26.696 "io_size": 131072, 00:09:26.696 "runtime": 1.404861, 00:09:26.696 "iops": 16522.631064567955, 00:09:26.696 "mibps": 2065.3288830709944, 00:09:26.696 "io_failed": 1, 00:09:26.696 "io_timeout": 0, 00:09:26.696 "avg_latency_us": 83.886214489434, 00:09:26.696 "min_latency_us": 25.152838427947597, 00:09:26.696 "max_latency_us": 1588.317903930131 00:09:26.696 } 00:09:26.696 ], 00:09:26.696 "core_count": 1 00:09:26.696 } 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81857 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 81857 ']' 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 81857 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 
00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81857 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:26.696 killing process with pid 81857 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81857' 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 81857 00:09:26.696 [2024-11-02 23:49:20.540129] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 81857 00:09:26.696 [2024-11-02 23:49:20.574594] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iir2O6LH5y 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:26.696 00:09:26.696 real 0m3.333s 00:09:26.696 user 0m4.202s 00:09:26.696 sys 0m0.586s 00:09:26.696 23:49:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:26.696 23:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.696 ************************************ 00:09:26.696 END TEST raid_write_error_test 00:09:26.696 ************************************ 00:09:26.956 23:49:20 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:26.956 23:49:20 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:09:26.956 23:49:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:26.956 23:49:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:26.956 23:49:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.956 ************************************ 00:09:26.956 START TEST raid_state_function_test 00:09:26.956 ************************************ 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:26.956 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:26.957 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:26.957 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:26.957 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:26.957 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:09:26.957 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:26.957 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:26.957 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=81990 00:09:26.957 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:26.957 Process raid pid: 81990 00:09:26.957 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81990' 00:09:26.957 23:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 81990 00:09:26.957 23:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 81990 ']' 00:09:26.957 23:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.957 23:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:26.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.957 23:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.957 23:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:26.957 23:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.957 [2024-11-02 23:49:20.951658] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:09:26.957 [2024-11-02 23:49:20.951811] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.217 [2024-11-02 23:49:21.085059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.217 [2024-11-02 23:49:21.110067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.217 [2024-11-02 23:49:21.151871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.217 [2024-11-02 23:49:21.151907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.786 23:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.787 [2024-11-02 23:49:21.776580] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:27.787 [2024-11-02 23:49:21.776637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:27.787 [2024-11-02 23:49:21.776647] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:27.787 [2024-11-02 23:49:21.776674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:27.787 [2024-11-02 23:49:21.776680] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:27.787 [2024-11-02 23:49:21.776691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:27.787 [2024-11-02 23:49:21.776697] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:27.787 [2024-11-02 23:49:21.776705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.787 "name": "Existed_Raid", 00:09:27.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.787 "strip_size_kb": 64, 00:09:27.787 "state": "configuring", 00:09:27.787 "raid_level": "concat", 00:09:27.787 "superblock": false, 00:09:27.787 "num_base_bdevs": 4, 00:09:27.787 "num_base_bdevs_discovered": 0, 00:09:27.787 "num_base_bdevs_operational": 4, 00:09:27.787 "base_bdevs_list": [ 00:09:27.787 { 00:09:27.787 "name": "BaseBdev1", 00:09:27.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.787 "is_configured": false, 00:09:27.787 "data_offset": 0, 00:09:27.787 "data_size": 0 00:09:27.787 }, 00:09:27.787 { 00:09:27.787 "name": "BaseBdev2", 00:09:27.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.787 "is_configured": false, 00:09:27.787 "data_offset": 0, 00:09:27.787 "data_size": 0 00:09:27.787 }, 00:09:27.787 { 00:09:27.787 "name": "BaseBdev3", 00:09:27.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.787 "is_configured": false, 00:09:27.787 "data_offset": 0, 00:09:27.787 "data_size": 0 00:09:27.787 }, 00:09:27.787 { 00:09:27.787 "name": "BaseBdev4", 00:09:27.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.787 "is_configured": false, 00:09:27.787 "data_offset": 0, 00:09:27.787 "data_size": 0 00:09:27.787 } 00:09:27.787 ] 00:09:27.787 }' 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.787 23:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.356 [2024-11-02 23:49:22.267661] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:28.356 [2024-11-02 23:49:22.267797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.356 [2024-11-02 23:49:22.275655] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.356 [2024-11-02 23:49:22.275751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.356 [2024-11-02 23:49:22.275780] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.356 [2024-11-02 23:49:22.275802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.356 [2024-11-02 23:49:22.275820] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:28.356 [2024-11-02 23:49:22.275840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:28.356 [2024-11-02 23:49:22.275880] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:28.356 [2024-11-02 23:49:22.275916] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.356 [2024-11-02 23:49:22.292362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.356 BaseBdev1 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.356 [ 00:09:28.356 { 00:09:28.356 "name": "BaseBdev1", 00:09:28.356 "aliases": [ 00:09:28.356 "53152bf3-093e-4a2d-8cbb-5b8448107f2b" 00:09:28.356 ], 00:09:28.356 "product_name": "Malloc disk", 00:09:28.356 "block_size": 512, 00:09:28.356 "num_blocks": 65536, 00:09:28.356 "uuid": "53152bf3-093e-4a2d-8cbb-5b8448107f2b", 00:09:28.356 "assigned_rate_limits": { 00:09:28.356 "rw_ios_per_sec": 0, 00:09:28.356 "rw_mbytes_per_sec": 0, 00:09:28.356 "r_mbytes_per_sec": 0, 00:09:28.356 "w_mbytes_per_sec": 0 00:09:28.356 }, 00:09:28.356 "claimed": true, 00:09:28.356 "claim_type": "exclusive_write", 00:09:28.356 "zoned": false, 00:09:28.356 "supported_io_types": { 00:09:28.356 "read": true, 00:09:28.356 "write": true, 00:09:28.356 "unmap": true, 00:09:28.356 "flush": true, 00:09:28.356 "reset": true, 00:09:28.356 "nvme_admin": false, 00:09:28.356 "nvme_io": false, 00:09:28.356 "nvme_io_md": false, 00:09:28.356 "write_zeroes": true, 00:09:28.356 "zcopy": true, 00:09:28.356 "get_zone_info": false, 00:09:28.356 "zone_management": false, 00:09:28.356 "zone_append": false, 00:09:28.356 "compare": false, 00:09:28.356 "compare_and_write": false, 00:09:28.356 "abort": true, 00:09:28.356 "seek_hole": false, 00:09:28.356 "seek_data": false, 00:09:28.356 "copy": true, 00:09:28.356 "nvme_iov_md": false 00:09:28.356 }, 00:09:28.356 "memory_domains": [ 00:09:28.356 { 00:09:28.356 "dma_device_id": "system", 00:09:28.356 "dma_device_type": 1 00:09:28.356 }, 00:09:28.356 { 00:09:28.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.356 "dma_device_type": 2 00:09:28.356 } 00:09:28.356 ], 00:09:28.356 "driver_specific": {} 00:09:28.356 } 00:09:28.356 ] 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.356 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.357 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.357 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.357 "name": "Existed_Raid", 
00:09:28.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.357 "strip_size_kb": 64, 00:09:28.357 "state": "configuring", 00:09:28.357 "raid_level": "concat", 00:09:28.357 "superblock": false, 00:09:28.357 "num_base_bdevs": 4, 00:09:28.357 "num_base_bdevs_discovered": 1, 00:09:28.357 "num_base_bdevs_operational": 4, 00:09:28.357 "base_bdevs_list": [ 00:09:28.357 { 00:09:28.357 "name": "BaseBdev1", 00:09:28.357 "uuid": "53152bf3-093e-4a2d-8cbb-5b8448107f2b", 00:09:28.357 "is_configured": true, 00:09:28.357 "data_offset": 0, 00:09:28.357 "data_size": 65536 00:09:28.357 }, 00:09:28.357 { 00:09:28.357 "name": "BaseBdev2", 00:09:28.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.357 "is_configured": false, 00:09:28.357 "data_offset": 0, 00:09:28.357 "data_size": 0 00:09:28.357 }, 00:09:28.357 { 00:09:28.357 "name": "BaseBdev3", 00:09:28.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.357 "is_configured": false, 00:09:28.357 "data_offset": 0, 00:09:28.357 "data_size": 0 00:09:28.357 }, 00:09:28.357 { 00:09:28.357 "name": "BaseBdev4", 00:09:28.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.357 "is_configured": false, 00:09:28.357 "data_offset": 0, 00:09:28.357 "data_size": 0 00:09:28.357 } 00:09:28.357 ] 00:09:28.357 }' 00:09:28.357 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.357 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.926 [2024-11-02 23:49:22.819494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:28.926 [2024-11-02 23:49:22.819602] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.926 [2024-11-02 23:49:22.827523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:28.926 [2024-11-02 23:49:22.829429] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:28.926 [2024-11-02 23:49:22.829502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:28.926 [2024-11-02 23:49:22.829530] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:28.926 [2024-11-02 23:49:22.829551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:28.926 [2024-11-02 23:49:22.829569] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:09:28.926 [2024-11-02 23:49:22.829588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:28.926 "name": "Existed_Raid",
00:09:28.926 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:28.926 "strip_size_kb": 64,
00:09:28.926 "state": "configuring",
00:09:28.926 "raid_level": "concat",
00:09:28.926 "superblock": false,
00:09:28.926 "num_base_bdevs": 4,
00:09:28.926 "num_base_bdevs_discovered": 1,
00:09:28.926 "num_base_bdevs_operational": 4,
00:09:28.926 "base_bdevs_list": [
00:09:28.926 {
00:09:28.926 "name": "BaseBdev1",
00:09:28.926 "uuid": "53152bf3-093e-4a2d-8cbb-5b8448107f2b",
00:09:28.926 "is_configured": true,
00:09:28.926 "data_offset": 0,
00:09:28.926 "data_size": 65536
00:09:28.926 },
00:09:28.926 {
00:09:28.926 "name": "BaseBdev2",
00:09:28.926 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:28.926 "is_configured": false,
00:09:28.926 "data_offset": 0,
00:09:28.926 "data_size": 0
00:09:28.926 },
00:09:28.926 {
00:09:28.926 "name": "BaseBdev3",
00:09:28.926 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:28.926 "is_configured": false,
00:09:28.926 "data_offset": 0,
00:09:28.926 "data_size": 0
00:09:28.926 },
00:09:28.926 {
00:09:28.926 "name": "BaseBdev4",
00:09:28.926 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:28.926 "is_configured": false,
00:09:28.926 "data_offset": 0,
00:09:28.926 "data_size": 0
00:09:28.926 }
00:09:28.926 ]
00:09:28.926 }'
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:28.926 23:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.185 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:29.185 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.185 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.185 [2024-11-02 23:49:23.265623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:29.185 BaseBdev2
00:09:29.185 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.185 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:29.185 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:09:29.185 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:09:29.185 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:09:29.185 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:09:29.185 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:09:29.185 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:09:29.185 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.185 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.445 [
00:09:29.445 {
00:09:29.445 "name": "BaseBdev2",
00:09:29.445 "aliases": [
00:09:29.445 "36946398-fd0f-45f3-9df5-8bafd49c2bca"
00:09:29.445 ],
00:09:29.445 "product_name": "Malloc disk",
00:09:29.445 "block_size": 512,
00:09:29.445 "num_blocks": 65536,
00:09:29.445 "uuid": "36946398-fd0f-45f3-9df5-8bafd49c2bca",
00:09:29.445 "assigned_rate_limits": {
00:09:29.445 "rw_ios_per_sec": 0,
00:09:29.445 "rw_mbytes_per_sec": 0,
00:09:29.445 "r_mbytes_per_sec": 0,
00:09:29.445 "w_mbytes_per_sec": 0
00:09:29.445 },
00:09:29.445 "claimed": true,
00:09:29.445 "claim_type": "exclusive_write",
00:09:29.445 "zoned": false,
00:09:29.445 "supported_io_types": {
00:09:29.445 "read": true,
00:09:29.445 "write": true,
00:09:29.445 "unmap": true,
00:09:29.445 "flush": true,
00:09:29.445 "reset": true,
00:09:29.445 "nvme_admin": false,
00:09:29.445 "nvme_io": false,
00:09:29.445 "nvme_io_md": false,
00:09:29.445 "write_zeroes": true,
00:09:29.445 "zcopy": true,
00:09:29.445 "get_zone_info": false,
00:09:29.445 "zone_management": false,
00:09:29.445 "zone_append": false,
00:09:29.445 "compare": false,
00:09:29.445 "compare_and_write": false,
00:09:29.445 "abort": true,
00:09:29.445 "seek_hole": false,
00:09:29.445 "seek_data": false,
00:09:29.445 "copy": true,
00:09:29.445 "nvme_iov_md": false
00:09:29.445 },
00:09:29.445 "memory_domains": [
00:09:29.445 {
00:09:29.445 "dma_device_id": "system",
00:09:29.445 "dma_device_type": 1
00:09:29.445 },
00:09:29.445 {
00:09:29.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:29.445 "dma_device_type": 2
00:09:29.445 }
00:09:29.445 ],
00:09:29.445 "driver_specific": {}
00:09:29.445 }
00:09:29.445 ]
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.445 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:29.445 "name": "Existed_Raid",
00:09:29.445 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:29.445 "strip_size_kb": 64,
00:09:29.445 "state": "configuring",
00:09:29.445 "raid_level": "concat",
00:09:29.445 "superblock": false,
00:09:29.445 "num_base_bdevs": 4,
00:09:29.445 "num_base_bdevs_discovered": 2,
00:09:29.445 "num_base_bdevs_operational": 4,
00:09:29.445 "base_bdevs_list": [
00:09:29.445 {
00:09:29.445 "name": "BaseBdev1",
00:09:29.445 "uuid": "53152bf3-093e-4a2d-8cbb-5b8448107f2b",
00:09:29.445 "is_configured": true,
00:09:29.445 "data_offset": 0,
00:09:29.445 "data_size": 65536
00:09:29.445 },
00:09:29.445 {
00:09:29.445 "name": "BaseBdev2",
00:09:29.445 "uuid": "36946398-fd0f-45f3-9df5-8bafd49c2bca",
00:09:29.445 "is_configured": true,
00:09:29.445 "data_offset": 0,
00:09:29.445 "data_size": 65536
00:09:29.445 },
00:09:29.445 {
00:09:29.445 "name": "BaseBdev3",
00:09:29.445 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:29.445 "is_configured": false,
00:09:29.445 "data_offset": 0,
00:09:29.445 "data_size": 0
00:09:29.445 },
00:09:29.445 {
00:09:29.445 "name": "BaseBdev4",
00:09:29.445 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:29.445 "is_configured": false,
00:09:29.446 "data_offset": 0,
00:09:29.446 "data_size": 0
00:09:29.446 }
00:09:29.446 ]
00:09:29.446 }'
00:09:29.446 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:29.446 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.705 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:29.705 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.705 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.706 [2024-11-02 23:49:23.769231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:29.706 BaseBdev3
00:09:29.706 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.706 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:09:29.706 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:09:29.706 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:09:29.706 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:09:29.706 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:09:29.706 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:09:29.706 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:09:29.706 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.706 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.706 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.706 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:29.706 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.706 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.706 [
00:09:29.706 {
00:09:29.706 "name": "BaseBdev3",
00:09:29.706 "aliases": [
00:09:29.706 "7ef7bcd1-f776-4fc4-bfac-28b8edad9653"
00:09:29.706 ],
00:09:29.706 "product_name": "Malloc disk",
00:09:29.706 "block_size": 512,
00:09:29.706 "num_blocks": 65536,
00:09:29.706 "uuid": "7ef7bcd1-f776-4fc4-bfac-28b8edad9653",
00:09:29.706 "assigned_rate_limits": {
00:09:29.706 "rw_ios_per_sec": 0,
00:09:29.706 "rw_mbytes_per_sec": 0,
00:09:29.706 "r_mbytes_per_sec": 0,
00:09:29.706 "w_mbytes_per_sec": 0
00:09:29.706 },
00:09:29.706 "claimed": true,
00:09:29.706 "claim_type": "exclusive_write",
00:09:29.706 "zoned": false,
00:09:29.965 "supported_io_types": {
00:09:29.965 "read": true,
00:09:29.965 "write": true,
00:09:29.965 "unmap": true,
00:09:29.965 "flush": true,
00:09:29.965 "reset": true,
00:09:29.965 "nvme_admin": false,
00:09:29.965 "nvme_io": false,
00:09:29.965 "nvme_io_md": false,
00:09:29.965 "write_zeroes": true,
00:09:29.965 "zcopy": true,
00:09:29.965 "get_zone_info": false,
00:09:29.965 "zone_management": false,
00:09:29.965 "zone_append": false,
00:09:29.965 "compare": false,
00:09:29.965 "compare_and_write": false,
00:09:29.965 "abort": true,
00:09:29.965 "seek_hole": false,
00:09:29.965 "seek_data": false,
00:09:29.965 "copy": true,
00:09:29.965 "nvme_iov_md": false
00:09:29.965 },
00:09:29.965 "memory_domains": [
00:09:29.965 {
00:09:29.965 "dma_device_id": "system",
00:09:29.965 "dma_device_type": 1
00:09:29.965 },
00:09:29.965 {
00:09:29.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:29.965 "dma_device_type": 2
00:09:29.965 }
00:09:29.965 ],
00:09:29.965 "driver_specific": {}
00:09:29.965 }
00:09:29.965 ]
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.965 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:29.965 "name": "Existed_Raid",
00:09:29.965 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:29.965 "strip_size_kb": 64,
00:09:29.965 "state": "configuring",
00:09:29.965 "raid_level": "concat",
00:09:29.965 "superblock": false,
00:09:29.965 "num_base_bdevs": 4,
00:09:29.965 "num_base_bdevs_discovered": 3,
00:09:29.965 "num_base_bdevs_operational": 4,
00:09:29.965 "base_bdevs_list": [
00:09:29.966 {
00:09:29.966 "name": "BaseBdev1",
00:09:29.966 "uuid": "53152bf3-093e-4a2d-8cbb-5b8448107f2b",
00:09:29.966 "is_configured": true,
00:09:29.966 "data_offset": 0,
00:09:29.966 "data_size": 65536
00:09:29.966 },
00:09:29.966 {
00:09:29.966 "name": "BaseBdev2",
00:09:29.966 "uuid": "36946398-fd0f-45f3-9df5-8bafd49c2bca",
00:09:29.966 "is_configured": true,
00:09:29.966 "data_offset": 0,
00:09:29.966 "data_size": 65536
00:09:29.966 },
00:09:29.966 {
00:09:29.966 "name": "BaseBdev3",
00:09:29.966 "uuid": "7ef7bcd1-f776-4fc4-bfac-28b8edad9653",
00:09:29.966 "is_configured": true,
00:09:29.966 "data_offset": 0,
00:09:29.966 "data_size": 65536
00:09:29.966 },
00:09:29.966 {
00:09:29.966 "name": "BaseBdev4",
00:09:29.966 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:29.966 "is_configured": false,
00:09:29.966 "data_offset": 0,
00:09:29.966 "data_size": 0
00:09:29.966 }
00:09:29.966 ]
00:09:29.966 }'
00:09:29.966 23:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:29.966 23:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.227 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:09:30.227 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:30.227 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.228 [2024-11-02 23:49:24.235269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:09:30.228 [2024-11-02 23:49:24.235320] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
00:09:30.228 [2024-11-02 23:49:24.235328] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:09:30.228 [2024-11-02 23:49:24.235586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530
00:09:30.228 [2024-11-02 23:49:24.235720] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
00:09:30.228 [2024-11-02 23:49:24.235732] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900
00:09:30.228 [2024-11-02 23:49:24.235944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:30.228 BaseBdev4
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.228 [
00:09:30.228 {
00:09:30.228 "name": "BaseBdev4",
00:09:30.228 "aliases": [
00:09:30.228 "56e96f81-5d78-493a-a710-2d116868c4c1"
00:09:30.228 ],
00:09:30.228 "product_name": "Malloc disk",
00:09:30.228 "block_size": 512,
00:09:30.228 "num_blocks": 65536,
00:09:30.228 "uuid": "56e96f81-5d78-493a-a710-2d116868c4c1",
00:09:30.228 "assigned_rate_limits": {
00:09:30.228 "rw_ios_per_sec": 0,
00:09:30.228 "rw_mbytes_per_sec": 0,
00:09:30.228 "r_mbytes_per_sec": 0,
00:09:30.228 "w_mbytes_per_sec": 0
00:09:30.228 },
00:09:30.228 "claimed": true,
00:09:30.228 "claim_type": "exclusive_write",
00:09:30.228 "zoned": false,
00:09:30.228 "supported_io_types": {
00:09:30.228 "read": true,
00:09:30.228 "write": true,
00:09:30.228 "unmap": true,
00:09:30.228 "flush": true,
00:09:30.228 "reset": true,
00:09:30.228 "nvme_admin": false,
00:09:30.228 "nvme_io": false,
00:09:30.228 "nvme_io_md": false,
00:09:30.228 "write_zeroes": true,
00:09:30.228 "zcopy": true,
00:09:30.228 "get_zone_info": false,
00:09:30.228 "zone_management": false,
00:09:30.228 "zone_append": false,
00:09:30.228 "compare": false,
00:09:30.228 "compare_and_write": false,
00:09:30.228 "abort": true,
00:09:30.228 "seek_hole": false,
00:09:30.228 "seek_data": false,
00:09:30.228 "copy": true,
00:09:30.228 "nvme_iov_md": false
00:09:30.228 },
00:09:30.228 "memory_domains": [
00:09:30.228 {
00:09:30.228 "dma_device_id": "system",
00:09:30.228 "dma_device_type": 1
00:09:30.228 },
00:09:30.228 {
00:09:30.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:30.228 "dma_device_type": 2
00:09:30.228 }
00:09:30.228 ],
00:09:30.228 "driver_specific": {}
00:09:30.228 }
00:09:30.228 ]
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:30.228 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:30.491 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:30.491 "name": "Existed_Raid",
00:09:30.491 "uuid": "5e4692ed-117b-49cb-b0c2-cc49794ebb2b",
00:09:30.491 "strip_size_kb": 64,
00:09:30.491 "state": "online",
00:09:30.491 "raid_level": "concat",
00:09:30.491 "superblock": false,
00:09:30.491 "num_base_bdevs": 4,
00:09:30.491 "num_base_bdevs_discovered": 4,
00:09:30.491 "num_base_bdevs_operational": 4,
00:09:30.491 "base_bdevs_list": [
00:09:30.491 {
00:09:30.491 "name": "BaseBdev1",
00:09:30.491 "uuid": "53152bf3-093e-4a2d-8cbb-5b8448107f2b",
00:09:30.491 "is_configured": true,
00:09:30.491 "data_offset": 0,
00:09:30.491 "data_size": 65536
00:09:30.491 },
00:09:30.491 {
00:09:30.491 "name": "BaseBdev2",
00:09:30.491 "uuid": "36946398-fd0f-45f3-9df5-8bafd49c2bca",
00:09:30.491 "is_configured": true,
00:09:30.491 "data_offset": 0,
00:09:30.491 "data_size": 65536
00:09:30.491 },
00:09:30.491 {
00:09:30.491 "name": "BaseBdev3",
00:09:30.491 "uuid": "7ef7bcd1-f776-4fc4-bfac-28b8edad9653",
00:09:30.491 "is_configured": true,
00:09:30.491 "data_offset": 0,
00:09:30.491 "data_size": 65536
00:09:30.491 },
00:09:30.491 {
00:09:30.491 "name": "BaseBdev4",
00:09:30.491 "uuid": "56e96f81-5d78-493a-a710-2d116868c4c1",
00:09:30.491 "is_configured": true,
00:09:30.491 "data_offset": 0,
00:09:30.491 "data_size": 65536
00:09:30.491 }
00:09:30.491 ]
00:09:30.491 }'
00:09:30.491 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:30.491 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.750 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:30.750 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:30.750 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:30.750 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:30.750 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:30.750 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:30.750 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:30.750 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:30.750 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:30.750 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.750 [2024-11-02 23:49:24.730921] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:30.750 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:30.750 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:30.750 "name": "Existed_Raid",
00:09:30.750 "aliases": [
00:09:30.750 "5e4692ed-117b-49cb-b0c2-cc49794ebb2b"
00:09:30.750 ],
00:09:30.750 "product_name": "Raid Volume",
00:09:30.750 "block_size": 512,
00:09:30.750 "num_blocks": 262144,
00:09:30.750 "uuid": "5e4692ed-117b-49cb-b0c2-cc49794ebb2b",
00:09:30.750 "assigned_rate_limits": {
00:09:30.750 "rw_ios_per_sec": 0,
00:09:30.750 "rw_mbytes_per_sec": 0,
00:09:30.750 "r_mbytes_per_sec": 0,
00:09:30.750 "w_mbytes_per_sec": 0
00:09:30.750 },
00:09:30.750 "claimed": false,
00:09:30.750 "zoned": false,
00:09:30.750 "supported_io_types": {
00:09:30.750 "read": true,
00:09:30.750 "write": true,
00:09:30.750 "unmap": true,
00:09:30.750 "flush": true,
00:09:30.750 "reset": true,
00:09:30.750 "nvme_admin": false,
00:09:30.750 "nvme_io": false,
00:09:30.750 "nvme_io_md": false,
00:09:30.750 "write_zeroes": true,
00:09:30.750 "zcopy": false,
00:09:30.750 "get_zone_info": false,
00:09:30.750 "zone_management": false,
00:09:30.750 "zone_append": false,
00:09:30.750 "compare": false,
00:09:30.750 "compare_and_write": false,
00:09:30.750 "abort": false,
00:09:30.750 "seek_hole": false,
00:09:30.750 "seek_data": false,
00:09:30.750 "copy": false,
00:09:30.750 "nvme_iov_md": false
00:09:30.750 },
00:09:30.750 "memory_domains": [
00:09:30.750 {
00:09:30.750 "dma_device_id": "system",
00:09:30.750 "dma_device_type": 1
00:09:30.750 },
00:09:30.750 {
00:09:30.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:30.750 "dma_device_type": 2
00:09:30.750 },
00:09:30.750 {
00:09:30.750 "dma_device_id": "system",
00:09:30.750 "dma_device_type": 1
00:09:30.750 },
00:09:30.750 {
00:09:30.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:30.750 "dma_device_type": 2
00:09:30.750 },
00:09:30.750 {
00:09:30.750 "dma_device_id": "system",
00:09:30.750 "dma_device_type": 1
00:09:30.750 },
00:09:30.750 {
00:09:30.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:30.750 "dma_device_type": 2
00:09:30.750 },
00:09:30.750 {
00:09:30.750 "dma_device_id": "system",
00:09:30.750 "dma_device_type": 1
00:09:30.750 },
00:09:30.750 {
00:09:30.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:30.750 "dma_device_type": 2
00:09:30.750 }
00:09:30.750 ],
00:09:30.750 "driver_specific": {
00:09:30.750 "raid": {
00:09:30.750 "uuid": "5e4692ed-117b-49cb-b0c2-cc49794ebb2b",
00:09:30.750 "strip_size_kb": 64,
00:09:30.750 "state": "online",
00:09:30.750 "raid_level": "concat",
00:09:30.751 "superblock": false,
00:09:30.751 "num_base_bdevs": 4,
00:09:30.751 "num_base_bdevs_discovered": 4,
00:09:30.751 "num_base_bdevs_operational": 4,
00:09:30.751 "base_bdevs_list": [
00:09:30.751 {
00:09:30.751 "name": "BaseBdev1",
00:09:30.751 "uuid": "53152bf3-093e-4a2d-8cbb-5b8448107f2b",
00:09:30.751 "is_configured": true,
00:09:30.751 "data_offset": 0,
00:09:30.751 "data_size": 65536
00:09:30.751 },
00:09:30.751 {
00:09:30.751 "name": "BaseBdev2",
00:09:30.751 "uuid": "36946398-fd0f-45f3-9df5-8bafd49c2bca",
00:09:30.751 "is_configured": true,
00:09:30.751 "data_offset": 0,
00:09:30.751 "data_size": 65536
00:09:30.751 },
00:09:30.751 {
00:09:30.751 "name": "BaseBdev3",
00:09:30.751 "uuid": "7ef7bcd1-f776-4fc4-bfac-28b8edad9653",
00:09:30.751 "is_configured": true,
00:09:30.751 "data_offset": 0,
00:09:30.751 "data_size": 65536
00:09:30.751 },
00:09:30.751 {
00:09:30.751 "name": "BaseBdev4",
00:09:30.751 "uuid": "56e96f81-5d78-493a-a710-2d116868c4c1",
00:09:30.751 "is_configured": true,
00:09:30.751 "data_offset": 0,
00:09:30.751 "data_size": 65536
00:09:30.751 }
00:09:30.751 ]
00:09:30.751 }
00:09:30.751 }
00:09:30.751 }'
00:09:30.751 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:30.751 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:30.751 BaseBdev2
00:09:30.751 BaseBdev3
00:09:30.751 BaseBdev4'
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:09:31.009 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:31.010 23:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.010 23:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.010 [2024-11-02 23:49:25.042019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:31.010 [2024-11-02 23:49:25.042055] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:31.010 [2024-11-02 23:49:25.042108] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.010 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:31.269 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:31.269 "name": "Existed_Raid",
00:09:31.269 "uuid": "5e4692ed-117b-49cb-b0c2-cc49794ebb2b",
00:09:31.269 "strip_size_kb": 64,
00:09:31.269 "state": "offline",
00:09:31.269 "raid_level": "concat",
00:09:31.269 "superblock": false,
00:09:31.269 "num_base_bdevs": 4,
00:09:31.269 "num_base_bdevs_discovered": 3,
00:09:31.269 "num_base_bdevs_operational": 3,
00:09:31.269 "base_bdevs_list": [
00:09:31.269 {
00:09:31.269 "name": null,
00:09:31.269 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:31.269 "is_configured": false,
00:09:31.269 "data_offset": 0,
00:09:31.269 "data_size": 65536
00:09:31.269 },
00:09:31.269 {
00:09:31.269 "name": "BaseBdev2",
00:09:31.269 "uuid": "36946398-fd0f-45f3-9df5-8bafd49c2bca",
00:09:31.269 "is_configured":
true, 00:09:31.269 "data_offset": 0, 00:09:31.269 "data_size": 65536 00:09:31.269 }, 00:09:31.269 { 00:09:31.269 "name": "BaseBdev3", 00:09:31.269 "uuid": "7ef7bcd1-f776-4fc4-bfac-28b8edad9653", 00:09:31.269 "is_configured": true, 00:09:31.269 "data_offset": 0, 00:09:31.269 "data_size": 65536 00:09:31.269 }, 00:09:31.269 { 00:09:31.269 "name": "BaseBdev4", 00:09:31.269 "uuid": "56e96f81-5d78-493a-a710-2d116868c4c1", 00:09:31.269 "is_configured": true, 00:09:31.269 "data_offset": 0, 00:09:31.269 "data_size": 65536 00:09:31.269 } 00:09:31.269 ] 00:09:31.269 }' 00:09:31.269 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.269 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.530 [2024-11-02 23:49:25.524439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.530 [2024-11-02 23:49:25.587391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:31.530 23:49:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:31.530 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.791 [2024-11-02 23:49:25.646369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:31.791 [2024-11-02 23:49:25.646424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.791 BaseBdev2 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.791 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.791 [ 00:09:31.791 { 00:09:31.791 "name": "BaseBdev2", 00:09:31.791 "aliases": [ 00:09:31.791 "1c9e29e9-fcd5-4a53-aa0c-47279a278cfe" 00:09:31.792 ], 00:09:31.792 "product_name": "Malloc disk", 00:09:31.792 "block_size": 512, 00:09:31.792 "num_blocks": 65536, 00:09:31.792 "uuid": "1c9e29e9-fcd5-4a53-aa0c-47279a278cfe", 00:09:31.792 "assigned_rate_limits": { 00:09:31.792 "rw_ios_per_sec": 0, 00:09:31.792 "rw_mbytes_per_sec": 0, 00:09:31.792 "r_mbytes_per_sec": 0, 00:09:31.792 "w_mbytes_per_sec": 0 00:09:31.792 }, 00:09:31.792 "claimed": false, 00:09:31.792 "zoned": false, 00:09:31.792 "supported_io_types": { 00:09:31.792 "read": true, 00:09:31.792 "write": true, 00:09:31.792 "unmap": true, 00:09:31.792 "flush": true, 00:09:31.792 "reset": true, 00:09:31.792 "nvme_admin": false, 00:09:31.792 "nvme_io": false, 00:09:31.792 "nvme_io_md": false, 00:09:31.792 "write_zeroes": true, 00:09:31.792 "zcopy": true, 00:09:31.792 "get_zone_info": false, 00:09:31.792 "zone_management": false, 00:09:31.792 "zone_append": false, 00:09:31.792 "compare": false, 00:09:31.792 "compare_and_write": false, 00:09:31.792 "abort": true, 00:09:31.792 "seek_hole": false, 00:09:31.792 
"seek_data": false, 00:09:31.792 "copy": true, 00:09:31.792 "nvme_iov_md": false 00:09:31.792 }, 00:09:31.792 "memory_domains": [ 00:09:31.792 { 00:09:31.792 "dma_device_id": "system", 00:09:31.792 "dma_device_type": 1 00:09:31.792 }, 00:09:31.792 { 00:09:31.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.792 "dma_device_type": 2 00:09:31.792 } 00:09:31.792 ], 00:09:31.792 "driver_specific": {} 00:09:31.792 } 00:09:31.792 ] 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.792 BaseBdev3 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.792 [ 00:09:31.792 { 00:09:31.792 "name": "BaseBdev3", 00:09:31.792 "aliases": [ 00:09:31.792 "c40b8c64-bf96-4bb6-b71b-f8b41b023a53" 00:09:31.792 ], 00:09:31.792 "product_name": "Malloc disk", 00:09:31.792 "block_size": 512, 00:09:31.792 "num_blocks": 65536, 00:09:31.792 "uuid": "c40b8c64-bf96-4bb6-b71b-f8b41b023a53", 00:09:31.792 "assigned_rate_limits": { 00:09:31.792 "rw_ios_per_sec": 0, 00:09:31.792 "rw_mbytes_per_sec": 0, 00:09:31.792 "r_mbytes_per_sec": 0, 00:09:31.792 "w_mbytes_per_sec": 0 00:09:31.792 }, 00:09:31.792 "claimed": false, 00:09:31.792 "zoned": false, 00:09:31.792 "supported_io_types": { 00:09:31.792 "read": true, 00:09:31.792 "write": true, 00:09:31.792 "unmap": true, 00:09:31.792 "flush": true, 00:09:31.792 "reset": true, 00:09:31.792 "nvme_admin": false, 00:09:31.792 "nvme_io": false, 00:09:31.792 "nvme_io_md": false, 00:09:31.792 "write_zeroes": true, 00:09:31.792 "zcopy": true, 00:09:31.792 "get_zone_info": false, 00:09:31.792 "zone_management": false, 00:09:31.792 "zone_append": false, 00:09:31.792 "compare": false, 00:09:31.792 "compare_and_write": false, 00:09:31.792 "abort": true, 00:09:31.792 "seek_hole": false, 00:09:31.792 "seek_data": false, 
00:09:31.792 "copy": true, 00:09:31.792 "nvme_iov_md": false 00:09:31.792 }, 00:09:31.792 "memory_domains": [ 00:09:31.792 { 00:09:31.792 "dma_device_id": "system", 00:09:31.792 "dma_device_type": 1 00:09:31.792 }, 00:09:31.792 { 00:09:31.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.792 "dma_device_type": 2 00:09:31.792 } 00:09:31.792 ], 00:09:31.792 "driver_specific": {} 00:09:31.792 } 00:09:31.792 ] 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.792 BaseBdev4 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:31.792 
23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.792 [ 00:09:31.792 { 00:09:31.792 "name": "BaseBdev4", 00:09:31.792 "aliases": [ 00:09:31.792 "fe4e342f-949c-4d40-9638-0a3302b3ba17" 00:09:31.792 ], 00:09:31.792 "product_name": "Malloc disk", 00:09:31.792 "block_size": 512, 00:09:31.792 "num_blocks": 65536, 00:09:31.792 "uuid": "fe4e342f-949c-4d40-9638-0a3302b3ba17", 00:09:31.792 "assigned_rate_limits": { 00:09:31.792 "rw_ios_per_sec": 0, 00:09:31.792 "rw_mbytes_per_sec": 0, 00:09:31.792 "r_mbytes_per_sec": 0, 00:09:31.792 "w_mbytes_per_sec": 0 00:09:31.792 }, 00:09:31.792 "claimed": false, 00:09:31.792 "zoned": false, 00:09:31.792 "supported_io_types": { 00:09:31.792 "read": true, 00:09:31.792 "write": true, 00:09:31.792 "unmap": true, 00:09:31.792 "flush": true, 00:09:31.792 "reset": true, 00:09:31.792 "nvme_admin": false, 00:09:31.792 "nvme_io": false, 00:09:31.792 "nvme_io_md": false, 00:09:31.792 "write_zeroes": true, 00:09:31.792 "zcopy": true, 00:09:31.792 "get_zone_info": false, 00:09:31.792 "zone_management": false, 00:09:31.792 "zone_append": false, 00:09:31.792 "compare": false, 00:09:31.792 "compare_and_write": false, 00:09:31.792 "abort": true, 00:09:31.792 "seek_hole": false, 00:09:31.792 "seek_data": false, 00:09:31.792 
"copy": true, 00:09:31.792 "nvme_iov_md": false 00:09:31.792 }, 00:09:31.792 "memory_domains": [ 00:09:31.792 { 00:09:31.792 "dma_device_id": "system", 00:09:31.792 "dma_device_type": 1 00:09:31.792 }, 00:09:31.792 { 00:09:31.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.792 "dma_device_type": 2 00:09:31.792 } 00:09:31.792 ], 00:09:31.792 "driver_specific": {} 00:09:31.792 } 00:09:31.792 ] 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.792 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.792 [2024-11-02 23:49:25.874372] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:31.792 [2024-11-02 23:49:25.874420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:31.792 [2024-11-02 23:49:25.874454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:31.792 [2024-11-02 23:49:25.876183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.793 [2024-11-02 23:49:25.876298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:31.793 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.793 23:49:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:31.793 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.793 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.793 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.793 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.793 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.793 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.793 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.793 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.793 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.050 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.050 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.050 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.050 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.050 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.050 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.050 "name": "Existed_Raid", 00:09:32.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.050 "strip_size_kb": 64, 00:09:32.050 "state": "configuring", 00:09:32.050 
"raid_level": "concat", 00:09:32.050 "superblock": false, 00:09:32.050 "num_base_bdevs": 4, 00:09:32.050 "num_base_bdevs_discovered": 3, 00:09:32.050 "num_base_bdevs_operational": 4, 00:09:32.050 "base_bdevs_list": [ 00:09:32.050 { 00:09:32.050 "name": "BaseBdev1", 00:09:32.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.050 "is_configured": false, 00:09:32.050 "data_offset": 0, 00:09:32.050 "data_size": 0 00:09:32.050 }, 00:09:32.050 { 00:09:32.050 "name": "BaseBdev2", 00:09:32.050 "uuid": "1c9e29e9-fcd5-4a53-aa0c-47279a278cfe", 00:09:32.050 "is_configured": true, 00:09:32.050 "data_offset": 0, 00:09:32.050 "data_size": 65536 00:09:32.050 }, 00:09:32.050 { 00:09:32.050 "name": "BaseBdev3", 00:09:32.050 "uuid": "c40b8c64-bf96-4bb6-b71b-f8b41b023a53", 00:09:32.050 "is_configured": true, 00:09:32.050 "data_offset": 0, 00:09:32.050 "data_size": 65536 00:09:32.050 }, 00:09:32.050 { 00:09:32.050 "name": "BaseBdev4", 00:09:32.050 "uuid": "fe4e342f-949c-4d40-9638-0a3302b3ba17", 00:09:32.050 "is_configured": true, 00:09:32.050 "data_offset": 0, 00:09:32.050 "data_size": 65536 00:09:32.050 } 00:09:32.050 ] 00:09:32.050 }' 00:09:32.050 23:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.050 23:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.309 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:32.309 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.309 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.309 [2024-11-02 23:49:26.257696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:32.309 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.309 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:32.309 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.310 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.310 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.310 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.310 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.310 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.310 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.310 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.310 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.310 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.310 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.310 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.310 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.310 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.310 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.310 "name": "Existed_Raid", 00:09:32.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.310 "strip_size_kb": 64, 00:09:32.310 "state": "configuring", 00:09:32.310 "raid_level": "concat", 00:09:32.310 "superblock": false, 
00:09:32.310 "num_base_bdevs": 4, 00:09:32.310 "num_base_bdevs_discovered": 2, 00:09:32.310 "num_base_bdevs_operational": 4, 00:09:32.310 "base_bdevs_list": [ 00:09:32.310 { 00:09:32.310 "name": "BaseBdev1", 00:09:32.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.310 "is_configured": false, 00:09:32.310 "data_offset": 0, 00:09:32.310 "data_size": 0 00:09:32.310 }, 00:09:32.310 { 00:09:32.310 "name": null, 00:09:32.310 "uuid": "1c9e29e9-fcd5-4a53-aa0c-47279a278cfe", 00:09:32.310 "is_configured": false, 00:09:32.310 "data_offset": 0, 00:09:32.310 "data_size": 65536 00:09:32.310 }, 00:09:32.310 { 00:09:32.310 "name": "BaseBdev3", 00:09:32.310 "uuid": "c40b8c64-bf96-4bb6-b71b-f8b41b023a53", 00:09:32.310 "is_configured": true, 00:09:32.310 "data_offset": 0, 00:09:32.310 "data_size": 65536 00:09:32.310 }, 00:09:32.310 { 00:09:32.310 "name": "BaseBdev4", 00:09:32.310 "uuid": "fe4e342f-949c-4d40-9638-0a3302b3ba17", 00:09:32.310 "is_configured": true, 00:09:32.310 "data_offset": 0, 00:09:32.310 "data_size": 65536 00:09:32.310 } 00:09:32.310 ] 00:09:32.310 }' 00:09:32.310 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.310 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:32.883 23:49:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.883 [2024-11-02 23:49:26.755594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:32.883 BaseBdev1 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:32.883 [ 00:09:32.883 { 00:09:32.883 "name": "BaseBdev1", 00:09:32.883 "aliases": [ 00:09:32.883 "abc00248-8d4a-4edf-994e-0ce2b48eae23" 00:09:32.883 ], 00:09:32.883 "product_name": "Malloc disk", 00:09:32.883 "block_size": 512, 00:09:32.883 "num_blocks": 65536, 00:09:32.883 "uuid": "abc00248-8d4a-4edf-994e-0ce2b48eae23", 00:09:32.883 "assigned_rate_limits": { 00:09:32.883 "rw_ios_per_sec": 0, 00:09:32.883 "rw_mbytes_per_sec": 0, 00:09:32.883 "r_mbytes_per_sec": 0, 00:09:32.883 "w_mbytes_per_sec": 0 00:09:32.883 }, 00:09:32.883 "claimed": true, 00:09:32.883 "claim_type": "exclusive_write", 00:09:32.883 "zoned": false, 00:09:32.883 "supported_io_types": { 00:09:32.883 "read": true, 00:09:32.883 "write": true, 00:09:32.883 "unmap": true, 00:09:32.883 "flush": true, 00:09:32.883 "reset": true, 00:09:32.883 "nvme_admin": false, 00:09:32.883 "nvme_io": false, 00:09:32.883 "nvme_io_md": false, 00:09:32.883 "write_zeroes": true, 00:09:32.883 "zcopy": true, 00:09:32.883 "get_zone_info": false, 00:09:32.883 "zone_management": false, 00:09:32.883 "zone_append": false, 00:09:32.883 "compare": false, 00:09:32.883 "compare_and_write": false, 00:09:32.883 "abort": true, 00:09:32.883 "seek_hole": false, 00:09:32.883 "seek_data": false, 00:09:32.883 "copy": true, 00:09:32.883 "nvme_iov_md": false 00:09:32.883 }, 00:09:32.883 "memory_domains": [ 00:09:32.883 { 00:09:32.883 "dma_device_id": "system", 00:09:32.883 "dma_device_type": 1 00:09:32.883 }, 00:09:32.883 { 00:09:32.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.883 "dma_device_type": 2 00:09:32.883 } 00:09:32.883 ], 00:09:32.883 "driver_specific": {} 00:09:32.883 } 00:09:32.883 ] 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.883 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.883 "name": "Existed_Raid", 00:09:32.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.883 "strip_size_kb": 64, 00:09:32.883 "state": "configuring", 00:09:32.883 "raid_level": "concat", 00:09:32.883 "superblock": false, 
00:09:32.883 "num_base_bdevs": 4, 00:09:32.883 "num_base_bdevs_discovered": 3, 00:09:32.883 "num_base_bdevs_operational": 4, 00:09:32.883 "base_bdevs_list": [ 00:09:32.883 { 00:09:32.883 "name": "BaseBdev1", 00:09:32.883 "uuid": "abc00248-8d4a-4edf-994e-0ce2b48eae23", 00:09:32.883 "is_configured": true, 00:09:32.883 "data_offset": 0, 00:09:32.883 "data_size": 65536 00:09:32.883 }, 00:09:32.883 { 00:09:32.883 "name": null, 00:09:32.883 "uuid": "1c9e29e9-fcd5-4a53-aa0c-47279a278cfe", 00:09:32.883 "is_configured": false, 00:09:32.883 "data_offset": 0, 00:09:32.883 "data_size": 65536 00:09:32.883 }, 00:09:32.883 { 00:09:32.883 "name": "BaseBdev3", 00:09:32.883 "uuid": "c40b8c64-bf96-4bb6-b71b-f8b41b023a53", 00:09:32.883 "is_configured": true, 00:09:32.884 "data_offset": 0, 00:09:32.884 "data_size": 65536 00:09:32.884 }, 00:09:32.884 { 00:09:32.884 "name": "BaseBdev4", 00:09:32.884 "uuid": "fe4e342f-949c-4d40-9638-0a3302b3ba17", 00:09:32.884 "is_configured": true, 00:09:32.884 "data_offset": 0, 00:09:32.884 "data_size": 65536 00:09:32.884 } 00:09:32.884 ] 00:09:32.884 }' 00:09:32.884 23:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.884 23:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:33.143 23:49:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.143 [2024-11-02 23:49:27.182930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.143 "name": "Existed_Raid", 00:09:33.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.143 "strip_size_kb": 64, 00:09:33.143 "state": "configuring", 00:09:33.143 "raid_level": "concat", 00:09:33.143 "superblock": false, 00:09:33.143 "num_base_bdevs": 4, 00:09:33.143 "num_base_bdevs_discovered": 2, 00:09:33.143 "num_base_bdevs_operational": 4, 00:09:33.143 "base_bdevs_list": [ 00:09:33.143 { 00:09:33.143 "name": "BaseBdev1", 00:09:33.143 "uuid": "abc00248-8d4a-4edf-994e-0ce2b48eae23", 00:09:33.143 "is_configured": true, 00:09:33.143 "data_offset": 0, 00:09:33.143 "data_size": 65536 00:09:33.143 }, 00:09:33.143 { 00:09:33.143 "name": null, 00:09:33.143 "uuid": "1c9e29e9-fcd5-4a53-aa0c-47279a278cfe", 00:09:33.143 "is_configured": false, 00:09:33.143 "data_offset": 0, 00:09:33.143 "data_size": 65536 00:09:33.143 }, 00:09:33.143 { 00:09:33.143 "name": null, 00:09:33.143 "uuid": "c40b8c64-bf96-4bb6-b71b-f8b41b023a53", 00:09:33.143 "is_configured": false, 00:09:33.143 "data_offset": 0, 00:09:33.143 "data_size": 65536 00:09:33.143 }, 00:09:33.143 { 00:09:33.143 "name": "BaseBdev4", 00:09:33.143 "uuid": "fe4e342f-949c-4d40-9638-0a3302b3ba17", 00:09:33.143 "is_configured": true, 00:09:33.143 "data_offset": 0, 00:09:33.143 "data_size": 65536 00:09:33.143 } 00:09:33.143 ] 00:09:33.143 }' 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.143 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.713 [2024-11-02 23:49:27.690143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.713 "name": "Existed_Raid", 00:09:33.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.713 "strip_size_kb": 64, 00:09:33.713 "state": "configuring", 00:09:33.713 "raid_level": "concat", 00:09:33.713 "superblock": false, 00:09:33.713 "num_base_bdevs": 4, 00:09:33.713 "num_base_bdevs_discovered": 3, 00:09:33.713 "num_base_bdevs_operational": 4, 00:09:33.713 "base_bdevs_list": [ 00:09:33.713 { 00:09:33.713 "name": "BaseBdev1", 00:09:33.713 "uuid": "abc00248-8d4a-4edf-994e-0ce2b48eae23", 00:09:33.713 "is_configured": true, 00:09:33.713 "data_offset": 0, 00:09:33.713 "data_size": 65536 00:09:33.713 }, 00:09:33.713 { 00:09:33.713 "name": null, 00:09:33.713 "uuid": "1c9e29e9-fcd5-4a53-aa0c-47279a278cfe", 00:09:33.713 "is_configured": false, 00:09:33.713 "data_offset": 0, 00:09:33.713 "data_size": 65536 00:09:33.713 }, 00:09:33.713 { 00:09:33.713 "name": "BaseBdev3", 00:09:33.713 "uuid": "c40b8c64-bf96-4bb6-b71b-f8b41b023a53", 00:09:33.713 
"is_configured": true, 00:09:33.713 "data_offset": 0, 00:09:33.713 "data_size": 65536 00:09:33.713 }, 00:09:33.713 { 00:09:33.713 "name": "BaseBdev4", 00:09:33.713 "uuid": "fe4e342f-949c-4d40-9638-0a3302b3ba17", 00:09:33.713 "is_configured": true, 00:09:33.713 "data_offset": 0, 00:09:33.713 "data_size": 65536 00:09:33.713 } 00:09:33.713 ] 00:09:33.713 }' 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.713 23:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.283 [2024-11-02 23:49:28.169374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.283 "name": "Existed_Raid", 00:09:34.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.283 "strip_size_kb": 64, 00:09:34.283 "state": "configuring", 00:09:34.283 "raid_level": "concat", 00:09:34.283 "superblock": false, 00:09:34.283 "num_base_bdevs": 4, 00:09:34.283 "num_base_bdevs_discovered": 2, 00:09:34.283 "num_base_bdevs_operational": 4, 
00:09:34.283 "base_bdevs_list": [ 00:09:34.283 { 00:09:34.283 "name": null, 00:09:34.283 "uuid": "abc00248-8d4a-4edf-994e-0ce2b48eae23", 00:09:34.283 "is_configured": false, 00:09:34.283 "data_offset": 0, 00:09:34.283 "data_size": 65536 00:09:34.283 }, 00:09:34.283 { 00:09:34.283 "name": null, 00:09:34.283 "uuid": "1c9e29e9-fcd5-4a53-aa0c-47279a278cfe", 00:09:34.283 "is_configured": false, 00:09:34.283 "data_offset": 0, 00:09:34.283 "data_size": 65536 00:09:34.283 }, 00:09:34.283 { 00:09:34.283 "name": "BaseBdev3", 00:09:34.283 "uuid": "c40b8c64-bf96-4bb6-b71b-f8b41b023a53", 00:09:34.283 "is_configured": true, 00:09:34.283 "data_offset": 0, 00:09:34.283 "data_size": 65536 00:09:34.283 }, 00:09:34.283 { 00:09:34.283 "name": "BaseBdev4", 00:09:34.283 "uuid": "fe4e342f-949c-4d40-9638-0a3302b3ba17", 00:09:34.283 "is_configured": true, 00:09:34.283 "data_offset": 0, 00:09:34.283 "data_size": 65536 00:09:34.283 } 00:09:34.283 ] 00:09:34.283 }' 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.283 23:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.543 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:34.543 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.543 23:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.543 23:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.543 23:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.543 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:34.543 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:34.543 23:49:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.543 23:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.803 [2024-11-02 23:49:28.639209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.803 23:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.803 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:34.803 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.803 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.803 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.803 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.803 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.803 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.803 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.803 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.803 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.803 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.803 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.803 23:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.803 23:49:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.803 23:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.803 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.803 "name": "Existed_Raid", 00:09:34.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.803 "strip_size_kb": 64, 00:09:34.803 "state": "configuring", 00:09:34.803 "raid_level": "concat", 00:09:34.803 "superblock": false, 00:09:34.803 "num_base_bdevs": 4, 00:09:34.803 "num_base_bdevs_discovered": 3, 00:09:34.803 "num_base_bdevs_operational": 4, 00:09:34.803 "base_bdevs_list": [ 00:09:34.803 { 00:09:34.803 "name": null, 00:09:34.803 "uuid": "abc00248-8d4a-4edf-994e-0ce2b48eae23", 00:09:34.803 "is_configured": false, 00:09:34.803 "data_offset": 0, 00:09:34.803 "data_size": 65536 00:09:34.803 }, 00:09:34.803 { 00:09:34.803 "name": "BaseBdev2", 00:09:34.803 "uuid": "1c9e29e9-fcd5-4a53-aa0c-47279a278cfe", 00:09:34.803 "is_configured": true, 00:09:34.803 "data_offset": 0, 00:09:34.803 "data_size": 65536 00:09:34.803 }, 00:09:34.803 { 00:09:34.803 "name": "BaseBdev3", 00:09:34.803 "uuid": "c40b8c64-bf96-4bb6-b71b-f8b41b023a53", 00:09:34.803 "is_configured": true, 00:09:34.803 "data_offset": 0, 00:09:34.803 "data_size": 65536 00:09:34.803 }, 00:09:34.803 { 00:09:34.803 "name": "BaseBdev4", 00:09:34.803 "uuid": "fe4e342f-949c-4d40-9638-0a3302b3ba17", 00:09:34.803 "is_configured": true, 00:09:34.803 "data_offset": 0, 00:09:34.803 "data_size": 65536 00:09:34.803 } 00:09:34.803 ] 00:09:34.803 }' 00:09:34.803 23:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.803 23:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u abc00248-8d4a-4edf-994e-0ce2b48eae23 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.063 [2024-11-02 23:49:29.149185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:35.063 [2024-11-02 23:49:29.149320] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:35.063 [2024-11-02 23:49:29.149332] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:35.063 [2024-11-02 23:49:29.149578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:09:35.063 [2024-11-02 23:49:29.149686] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:35.063 [2024-11-02 23:49:29.149696] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:35.063 [2024-11-02 23:49:29.149873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.063 NewBaseBdev 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.063 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.321 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.321 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:35.321 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.321 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.321 [ 00:09:35.321 { 
00:09:35.322 "name": "NewBaseBdev", 00:09:35.322 "aliases": [ 00:09:35.322 "abc00248-8d4a-4edf-994e-0ce2b48eae23" 00:09:35.322 ], 00:09:35.322 "product_name": "Malloc disk", 00:09:35.322 "block_size": 512, 00:09:35.322 "num_blocks": 65536, 00:09:35.322 "uuid": "abc00248-8d4a-4edf-994e-0ce2b48eae23", 00:09:35.322 "assigned_rate_limits": { 00:09:35.322 "rw_ios_per_sec": 0, 00:09:35.322 "rw_mbytes_per_sec": 0, 00:09:35.322 "r_mbytes_per_sec": 0, 00:09:35.322 "w_mbytes_per_sec": 0 00:09:35.322 }, 00:09:35.322 "claimed": true, 00:09:35.322 "claim_type": "exclusive_write", 00:09:35.322 "zoned": false, 00:09:35.322 "supported_io_types": { 00:09:35.322 "read": true, 00:09:35.322 "write": true, 00:09:35.322 "unmap": true, 00:09:35.322 "flush": true, 00:09:35.322 "reset": true, 00:09:35.322 "nvme_admin": false, 00:09:35.322 "nvme_io": false, 00:09:35.322 "nvme_io_md": false, 00:09:35.322 "write_zeroes": true, 00:09:35.322 "zcopy": true, 00:09:35.322 "get_zone_info": false, 00:09:35.322 "zone_management": false, 00:09:35.322 "zone_append": false, 00:09:35.322 "compare": false, 00:09:35.322 "compare_and_write": false, 00:09:35.322 "abort": true, 00:09:35.322 "seek_hole": false, 00:09:35.322 "seek_data": false, 00:09:35.322 "copy": true, 00:09:35.322 "nvme_iov_md": false 00:09:35.322 }, 00:09:35.322 "memory_domains": [ 00:09:35.322 { 00:09:35.322 "dma_device_id": "system", 00:09:35.322 "dma_device_type": 1 00:09:35.322 }, 00:09:35.322 { 00:09:35.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.322 "dma_device_type": 2 00:09:35.322 } 00:09:35.322 ], 00:09:35.322 "driver_specific": {} 00:09:35.322 } 00:09:35.322 ] 00:09:35.322 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.322 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:35.322 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:35.322 
23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.322 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.322 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.322 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.322 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.322 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.322 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.322 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.322 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.322 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.322 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.322 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.322 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.322 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.322 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.322 "name": "Existed_Raid", 00:09:35.322 "uuid": "fcd28700-9ce1-4b0a-9bcb-43026777f53a", 00:09:35.322 "strip_size_kb": 64, 00:09:35.322 "state": "online", 00:09:35.322 "raid_level": "concat", 00:09:35.322 "superblock": false, 00:09:35.322 "num_base_bdevs": 4, 00:09:35.322 "num_base_bdevs_discovered": 4, 00:09:35.322 
"num_base_bdevs_operational": 4, 00:09:35.322 "base_bdevs_list": [ 00:09:35.322 { 00:09:35.322 "name": "NewBaseBdev", 00:09:35.322 "uuid": "abc00248-8d4a-4edf-994e-0ce2b48eae23", 00:09:35.322 "is_configured": true, 00:09:35.322 "data_offset": 0, 00:09:35.322 "data_size": 65536 00:09:35.322 }, 00:09:35.322 { 00:09:35.322 "name": "BaseBdev2", 00:09:35.322 "uuid": "1c9e29e9-fcd5-4a53-aa0c-47279a278cfe", 00:09:35.322 "is_configured": true, 00:09:35.322 "data_offset": 0, 00:09:35.322 "data_size": 65536 00:09:35.322 }, 00:09:35.322 { 00:09:35.322 "name": "BaseBdev3", 00:09:35.322 "uuid": "c40b8c64-bf96-4bb6-b71b-f8b41b023a53", 00:09:35.322 "is_configured": true, 00:09:35.322 "data_offset": 0, 00:09:35.322 "data_size": 65536 00:09:35.322 }, 00:09:35.322 { 00:09:35.322 "name": "BaseBdev4", 00:09:35.322 "uuid": "fe4e342f-949c-4d40-9638-0a3302b3ba17", 00:09:35.322 "is_configured": true, 00:09:35.322 "data_offset": 0, 00:09:35.322 "data_size": 65536 00:09:35.322 } 00:09:35.322 ] 00:09:35.322 }' 00:09:35.322 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.322 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.582 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:35.582 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:35.582 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.582 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.582 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.582 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.582 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.582 
23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:35.582 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.582 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.582 [2024-11-02 23:49:29.604801] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.582 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.582 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.582 "name": "Existed_Raid", 00:09:35.582 "aliases": [ 00:09:35.582 "fcd28700-9ce1-4b0a-9bcb-43026777f53a" 00:09:35.582 ], 00:09:35.582 "product_name": "Raid Volume", 00:09:35.582 "block_size": 512, 00:09:35.582 "num_blocks": 262144, 00:09:35.582 "uuid": "fcd28700-9ce1-4b0a-9bcb-43026777f53a", 00:09:35.582 "assigned_rate_limits": { 00:09:35.582 "rw_ios_per_sec": 0, 00:09:35.582 "rw_mbytes_per_sec": 0, 00:09:35.582 "r_mbytes_per_sec": 0, 00:09:35.582 "w_mbytes_per_sec": 0 00:09:35.582 }, 00:09:35.582 "claimed": false, 00:09:35.582 "zoned": false, 00:09:35.582 "supported_io_types": { 00:09:35.582 "read": true, 00:09:35.582 "write": true, 00:09:35.582 "unmap": true, 00:09:35.582 "flush": true, 00:09:35.582 "reset": true, 00:09:35.582 "nvme_admin": false, 00:09:35.582 "nvme_io": false, 00:09:35.582 "nvme_io_md": false, 00:09:35.582 "write_zeroes": true, 00:09:35.582 "zcopy": false, 00:09:35.582 "get_zone_info": false, 00:09:35.582 "zone_management": false, 00:09:35.582 "zone_append": false, 00:09:35.582 "compare": false, 00:09:35.582 "compare_and_write": false, 00:09:35.582 "abort": false, 00:09:35.582 "seek_hole": false, 00:09:35.582 "seek_data": false, 00:09:35.582 "copy": false, 00:09:35.582 "nvme_iov_md": false 00:09:35.582 }, 00:09:35.582 "memory_domains": [ 00:09:35.582 { 00:09:35.582 "dma_device_id": 
"system", 00:09:35.582 "dma_device_type": 1 00:09:35.582 }, 00:09:35.582 { 00:09:35.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.582 "dma_device_type": 2 00:09:35.582 }, 00:09:35.582 { 00:09:35.582 "dma_device_id": "system", 00:09:35.582 "dma_device_type": 1 00:09:35.582 }, 00:09:35.582 { 00:09:35.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.582 "dma_device_type": 2 00:09:35.582 }, 00:09:35.582 { 00:09:35.582 "dma_device_id": "system", 00:09:35.582 "dma_device_type": 1 00:09:35.582 }, 00:09:35.582 { 00:09:35.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.582 "dma_device_type": 2 00:09:35.582 }, 00:09:35.582 { 00:09:35.582 "dma_device_id": "system", 00:09:35.582 "dma_device_type": 1 00:09:35.582 }, 00:09:35.582 { 00:09:35.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.582 "dma_device_type": 2 00:09:35.582 } 00:09:35.582 ], 00:09:35.582 "driver_specific": { 00:09:35.582 "raid": { 00:09:35.582 "uuid": "fcd28700-9ce1-4b0a-9bcb-43026777f53a", 00:09:35.582 "strip_size_kb": 64, 00:09:35.582 "state": "online", 00:09:35.582 "raid_level": "concat", 00:09:35.582 "superblock": false, 00:09:35.582 "num_base_bdevs": 4, 00:09:35.582 "num_base_bdevs_discovered": 4, 00:09:35.582 "num_base_bdevs_operational": 4, 00:09:35.582 "base_bdevs_list": [ 00:09:35.582 { 00:09:35.582 "name": "NewBaseBdev", 00:09:35.582 "uuid": "abc00248-8d4a-4edf-994e-0ce2b48eae23", 00:09:35.582 "is_configured": true, 00:09:35.582 "data_offset": 0, 00:09:35.582 "data_size": 65536 00:09:35.582 }, 00:09:35.582 { 00:09:35.582 "name": "BaseBdev2", 00:09:35.582 "uuid": "1c9e29e9-fcd5-4a53-aa0c-47279a278cfe", 00:09:35.582 "is_configured": true, 00:09:35.582 "data_offset": 0, 00:09:35.582 "data_size": 65536 00:09:35.582 }, 00:09:35.582 { 00:09:35.582 "name": "BaseBdev3", 00:09:35.582 "uuid": "c40b8c64-bf96-4bb6-b71b-f8b41b023a53", 00:09:35.582 "is_configured": true, 00:09:35.582 "data_offset": 0, 00:09:35.582 "data_size": 65536 00:09:35.582 }, 00:09:35.582 { 00:09:35.582 "name": 
"BaseBdev4", 00:09:35.582 "uuid": "fe4e342f-949c-4d40-9638-0a3302b3ba17", 00:09:35.582 "is_configured": true, 00:09:35.582 "data_offset": 0, 00:09:35.582 "data_size": 65536 00:09:35.582 } 00:09:35.582 ] 00:09:35.582 } 00:09:35.582 } 00:09:35.582 }' 00:09:35.582 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:35.842 BaseBdev2 00:09:35.842 BaseBdev3 00:09:35.842 BaseBdev4' 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:35.842 23:49:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.842 [2024-11-02 23:49:29.915946] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.842 [2024-11-02 23:49:29.916022] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.842 [2024-11-02 23:49:29.916096] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.842 [2024-11-02 23:49:29.916160] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.842 [2024-11-02 23:49:29.916170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 81990 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 
-- # '[' -z 81990 ']' 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 81990 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:35.842 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81990 00:09:36.102 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:36.102 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:36.102 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81990' 00:09:36.102 killing process with pid 81990 00:09:36.102 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 81990 00:09:36.102 [2024-11-02 23:49:29.952692] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:36.102 23:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 81990 00:09:36.102 [2024-11-02 23:49:29.992095] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:36.102 23:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:36.362 ************************************ 00:09:36.362 END TEST raid_state_function_test 00:09:36.362 ************************************ 00:09:36.362 00:09:36.362 real 0m9.338s 00:09:36.362 user 0m16.003s 00:09:36.362 sys 0m1.985s 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.362 23:49:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:09:36.362 23:49:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:36.362 23:49:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:36.362 23:49:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:36.362 ************************************ 00:09:36.362 START TEST raid_state_function_test_sb 00:09:36.362 ************************************ 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:36.362 23:49:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:36.362 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:36.363 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:36.363 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:36.363 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:36.363 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82639 00:09:36.363 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:36.363 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82639' 00:09:36.363 Process raid pid: 82639 00:09:36.363 23:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82639 00:09:36.363 23:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 82639 ']' 00:09:36.363 23:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.363 23:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:36.363 23:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.363 23:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:36.363 23:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.363 [2024-11-02 23:49:30.355191] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:09:36.363 [2024-11-02 23:49:30.355385] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.622 [2024-11-02 23:49:30.503605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.622 [2024-11-02 23:49:30.529357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.622 [2024-11-02 23:49:30.571714] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.622 [2024-11-02 23:49:30.571853] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.194 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:37.194 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:37.194 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:37.194 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.194 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.194 [2024-11-02 23:49:31.213223] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.194 [2024-11-02 23:49:31.213365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.194 [2024-11-02 23:49:31.213396] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.194 [2024-11-02 23:49:31.213420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.194 [2024-11-02 23:49:31.213437] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:09:37.194 [2024-11-02 23:49:31.213460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:37.194 [2024-11-02 23:49:31.213477] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:37.194 [2024-11-02 23:49:31.213510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:37.194 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.194 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:37.194 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.194 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.194 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.194 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.194 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.194 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.194 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.194 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.194 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.194 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.194 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.195 
23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.195 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.195 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.195 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.195 "name": "Existed_Raid", 00:09:37.195 "uuid": "51a43247-1f90-458f-8472-e8507cdbd672", 00:09:37.195 "strip_size_kb": 64, 00:09:37.195 "state": "configuring", 00:09:37.195 "raid_level": "concat", 00:09:37.195 "superblock": true, 00:09:37.195 "num_base_bdevs": 4, 00:09:37.195 "num_base_bdevs_discovered": 0, 00:09:37.195 "num_base_bdevs_operational": 4, 00:09:37.195 "base_bdevs_list": [ 00:09:37.195 { 00:09:37.195 "name": "BaseBdev1", 00:09:37.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.195 "is_configured": false, 00:09:37.195 "data_offset": 0, 00:09:37.195 "data_size": 0 00:09:37.195 }, 00:09:37.195 { 00:09:37.195 "name": "BaseBdev2", 00:09:37.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.195 "is_configured": false, 00:09:37.195 "data_offset": 0, 00:09:37.195 "data_size": 0 00:09:37.195 }, 00:09:37.195 { 00:09:37.195 "name": "BaseBdev3", 00:09:37.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.195 "is_configured": false, 00:09:37.195 "data_offset": 0, 00:09:37.195 "data_size": 0 00:09:37.195 }, 00:09:37.195 { 00:09:37.195 "name": "BaseBdev4", 00:09:37.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.195 "is_configured": false, 00:09:37.195 "data_offset": 0, 00:09:37.195 "data_size": 0 00:09:37.195 } 00:09:37.195 ] 00:09:37.195 }' 00:09:37.195 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.195 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.764 23:49:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.764 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.764 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.764 [2024-11-02 23:49:31.684339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.764 [2024-11-02 23:49:31.684446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:37.764 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.764 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:37.764 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.764 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.764 [2024-11-02 23:49:31.696342] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.764 [2024-11-02 23:49:31.696384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.764 [2024-11-02 23:49:31.696393] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.764 [2024-11-02 23:49:31.696418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.764 [2024-11-02 23:49:31.696425] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:37.764 [2024-11-02 23:49:31.696433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:37.764 [2024-11-02 23:49:31.696439] bdev.c:8271:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:09:37.764 [2024-11-02 23:49:31.696448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:37.764 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.764 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:37.764 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.764 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.764 [2024-11-02 23:49:31.716920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.764 BaseBdev1 00:09:37.764 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.764 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:37.764 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:37.764 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:37.764 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:37.764 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:37.764 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:37.764 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.765 [ 00:09:37.765 { 00:09:37.765 "name": "BaseBdev1", 00:09:37.765 "aliases": [ 00:09:37.765 "86a5fa7a-6c71-4e24-a176-a9adfa49b9d3" 00:09:37.765 ], 00:09:37.765 "product_name": "Malloc disk", 00:09:37.765 "block_size": 512, 00:09:37.765 "num_blocks": 65536, 00:09:37.765 "uuid": "86a5fa7a-6c71-4e24-a176-a9adfa49b9d3", 00:09:37.765 "assigned_rate_limits": { 00:09:37.765 "rw_ios_per_sec": 0, 00:09:37.765 "rw_mbytes_per_sec": 0, 00:09:37.765 "r_mbytes_per_sec": 0, 00:09:37.765 "w_mbytes_per_sec": 0 00:09:37.765 }, 00:09:37.765 "claimed": true, 00:09:37.765 "claim_type": "exclusive_write", 00:09:37.765 "zoned": false, 00:09:37.765 "supported_io_types": { 00:09:37.765 "read": true, 00:09:37.765 "write": true, 00:09:37.765 "unmap": true, 00:09:37.765 "flush": true, 00:09:37.765 "reset": true, 00:09:37.765 "nvme_admin": false, 00:09:37.765 "nvme_io": false, 00:09:37.765 "nvme_io_md": false, 00:09:37.765 "write_zeroes": true, 00:09:37.765 "zcopy": true, 00:09:37.765 "get_zone_info": false, 00:09:37.765 "zone_management": false, 00:09:37.765 "zone_append": false, 00:09:37.765 "compare": false, 00:09:37.765 "compare_and_write": false, 00:09:37.765 "abort": true, 00:09:37.765 "seek_hole": false, 00:09:37.765 "seek_data": false, 00:09:37.765 "copy": true, 00:09:37.765 "nvme_iov_md": false 00:09:37.765 }, 00:09:37.765 "memory_domains": [ 00:09:37.765 { 00:09:37.765 "dma_device_id": "system", 00:09:37.765 "dma_device_type": 1 00:09:37.765 }, 00:09:37.765 { 00:09:37.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.765 "dma_device_type": 2 00:09:37.765 } 
00:09:37.765 ], 00:09:37.765 "driver_specific": {} 00:09:37.765 } 00:09:37.765 ] 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.765 23:49:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.765 "name": "Existed_Raid", 00:09:37.765 "uuid": "f0f8abcf-7c9d-43ea-8143-668d16469317", 00:09:37.765 "strip_size_kb": 64, 00:09:37.765 "state": "configuring", 00:09:37.765 "raid_level": "concat", 00:09:37.765 "superblock": true, 00:09:37.765 "num_base_bdevs": 4, 00:09:37.765 "num_base_bdevs_discovered": 1, 00:09:37.765 "num_base_bdevs_operational": 4, 00:09:37.765 "base_bdevs_list": [ 00:09:37.765 { 00:09:37.765 "name": "BaseBdev1", 00:09:37.765 "uuid": "86a5fa7a-6c71-4e24-a176-a9adfa49b9d3", 00:09:37.765 "is_configured": true, 00:09:37.765 "data_offset": 2048, 00:09:37.765 "data_size": 63488 00:09:37.765 }, 00:09:37.765 { 00:09:37.765 "name": "BaseBdev2", 00:09:37.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.765 "is_configured": false, 00:09:37.765 "data_offset": 0, 00:09:37.765 "data_size": 0 00:09:37.765 }, 00:09:37.765 { 00:09:37.765 "name": "BaseBdev3", 00:09:37.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.765 "is_configured": false, 00:09:37.765 "data_offset": 0, 00:09:37.765 "data_size": 0 00:09:37.765 }, 00:09:37.765 { 00:09:37.765 "name": "BaseBdev4", 00:09:37.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.765 "is_configured": false, 00:09:37.765 "data_offset": 0, 00:09:37.765 "data_size": 0 00:09:37.765 } 00:09:37.765 ] 00:09:37.765 }' 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.765 23:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.334 23:49:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.334 [2024-11-02 23:49:32.220117] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:38.334 [2024-11-02 23:49:32.220253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.334 [2024-11-02 23:49:32.232129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.334 [2024-11-02 23:49:32.234034] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.334 [2024-11-02 23:49:32.234106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:38.334 [2024-11-02 23:49:32.234133] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:38.334 [2024-11-02 23:49:32.234155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:38.334 [2024-11-02 23:49:32.234173] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:38.334 [2024-11-02 23:49:32.234193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.334 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:09:38.334 "name": "Existed_Raid", 00:09:38.334 "uuid": "7e6ba72b-54d5-4e82-917d-209e619bacdf", 00:09:38.334 "strip_size_kb": 64, 00:09:38.334 "state": "configuring", 00:09:38.334 "raid_level": "concat", 00:09:38.334 "superblock": true, 00:09:38.334 "num_base_bdevs": 4, 00:09:38.334 "num_base_bdevs_discovered": 1, 00:09:38.334 "num_base_bdevs_operational": 4, 00:09:38.334 "base_bdevs_list": [ 00:09:38.334 { 00:09:38.334 "name": "BaseBdev1", 00:09:38.334 "uuid": "86a5fa7a-6c71-4e24-a176-a9adfa49b9d3", 00:09:38.334 "is_configured": true, 00:09:38.334 "data_offset": 2048, 00:09:38.334 "data_size": 63488 00:09:38.334 }, 00:09:38.334 { 00:09:38.334 "name": "BaseBdev2", 00:09:38.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.334 "is_configured": false, 00:09:38.334 "data_offset": 0, 00:09:38.334 "data_size": 0 00:09:38.334 }, 00:09:38.334 { 00:09:38.334 "name": "BaseBdev3", 00:09:38.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.334 "is_configured": false, 00:09:38.334 "data_offset": 0, 00:09:38.334 "data_size": 0 00:09:38.334 }, 00:09:38.334 { 00:09:38.334 "name": "BaseBdev4", 00:09:38.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.334 "is_configured": false, 00:09:38.334 "data_offset": 0, 00:09:38.334 "data_size": 0 00:09:38.334 } 00:09:38.335 ] 00:09:38.335 }' 00:09:38.335 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.335 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.594 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:38.594 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.594 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.594 [2024-11-02 23:49:32.678232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:09:38.595 BaseBdev2 00:09:38.595 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.595 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:38.595 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:38.595 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:38.595 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:38.595 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:38.595 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:38.595 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:38.595 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.595 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.854 [ 00:09:38.854 { 00:09:38.854 "name": "BaseBdev2", 00:09:38.854 "aliases": [ 00:09:38.854 "98b51c29-f42e-40d3-8f0e-2e4c1fe25e49" 00:09:38.854 ], 00:09:38.854 "product_name": "Malloc disk", 00:09:38.854 "block_size": 512, 00:09:38.854 "num_blocks": 65536, 00:09:38.854 "uuid": "98b51c29-f42e-40d3-8f0e-2e4c1fe25e49", 
00:09:38.854 "assigned_rate_limits": { 00:09:38.854 "rw_ios_per_sec": 0, 00:09:38.854 "rw_mbytes_per_sec": 0, 00:09:38.854 "r_mbytes_per_sec": 0, 00:09:38.854 "w_mbytes_per_sec": 0 00:09:38.854 }, 00:09:38.854 "claimed": true, 00:09:38.854 "claim_type": "exclusive_write", 00:09:38.854 "zoned": false, 00:09:38.854 "supported_io_types": { 00:09:38.854 "read": true, 00:09:38.854 "write": true, 00:09:38.854 "unmap": true, 00:09:38.854 "flush": true, 00:09:38.854 "reset": true, 00:09:38.854 "nvme_admin": false, 00:09:38.854 "nvme_io": false, 00:09:38.854 "nvme_io_md": false, 00:09:38.854 "write_zeroes": true, 00:09:38.854 "zcopy": true, 00:09:38.854 "get_zone_info": false, 00:09:38.854 "zone_management": false, 00:09:38.854 "zone_append": false, 00:09:38.854 "compare": false, 00:09:38.854 "compare_and_write": false, 00:09:38.854 "abort": true, 00:09:38.854 "seek_hole": false, 00:09:38.854 "seek_data": false, 00:09:38.854 "copy": true, 00:09:38.854 "nvme_iov_md": false 00:09:38.854 }, 00:09:38.854 "memory_domains": [ 00:09:38.854 { 00:09:38.854 "dma_device_id": "system", 00:09:38.854 "dma_device_type": 1 00:09:38.854 }, 00:09:38.854 { 00:09:38.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.854 "dma_device_type": 2 00:09:38.854 } 00:09:38.854 ], 00:09:38.854 "driver_specific": {} 00:09:38.854 } 00:09:38.854 ] 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.854 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.855 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.855 "name": "Existed_Raid", 00:09:38.855 "uuid": "7e6ba72b-54d5-4e82-917d-209e619bacdf", 00:09:38.855 "strip_size_kb": 64, 00:09:38.855 "state": "configuring", 00:09:38.855 "raid_level": "concat", 00:09:38.855 "superblock": true, 00:09:38.855 "num_base_bdevs": 4, 00:09:38.855 "num_base_bdevs_discovered": 2, 00:09:38.855 
"num_base_bdevs_operational": 4, 00:09:38.855 "base_bdevs_list": [ 00:09:38.855 { 00:09:38.855 "name": "BaseBdev1", 00:09:38.855 "uuid": "86a5fa7a-6c71-4e24-a176-a9adfa49b9d3", 00:09:38.855 "is_configured": true, 00:09:38.855 "data_offset": 2048, 00:09:38.855 "data_size": 63488 00:09:38.855 }, 00:09:38.855 { 00:09:38.855 "name": "BaseBdev2", 00:09:38.855 "uuid": "98b51c29-f42e-40d3-8f0e-2e4c1fe25e49", 00:09:38.855 "is_configured": true, 00:09:38.855 "data_offset": 2048, 00:09:38.855 "data_size": 63488 00:09:38.855 }, 00:09:38.855 { 00:09:38.855 "name": "BaseBdev3", 00:09:38.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.855 "is_configured": false, 00:09:38.855 "data_offset": 0, 00:09:38.855 "data_size": 0 00:09:38.855 }, 00:09:38.855 { 00:09:38.855 "name": "BaseBdev4", 00:09:38.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.855 "is_configured": false, 00:09:38.855 "data_offset": 0, 00:09:38.855 "data_size": 0 00:09:38.855 } 00:09:38.855 ] 00:09:38.855 }' 00:09:38.855 23:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.855 23:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.114 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:39.114 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.114 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.114 [2024-11-02 23:49:33.191796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.114 BaseBdev3 00:09:39.114 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.114 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:39.114 23:49:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:39.114 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:39.114 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:39.114 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:39.114 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:39.114 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:39.114 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.114 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.114 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.114 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:39.114 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.114 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.376 [ 00:09:39.376 { 00:09:39.376 "name": "BaseBdev3", 00:09:39.376 "aliases": [ 00:09:39.376 "85ac7058-cf71-4451-afbd-3e37d0c00bc0" 00:09:39.376 ], 00:09:39.376 "product_name": "Malloc disk", 00:09:39.376 "block_size": 512, 00:09:39.376 "num_blocks": 65536, 00:09:39.376 "uuid": "85ac7058-cf71-4451-afbd-3e37d0c00bc0", 00:09:39.376 "assigned_rate_limits": { 00:09:39.376 "rw_ios_per_sec": 0, 00:09:39.376 "rw_mbytes_per_sec": 0, 00:09:39.376 "r_mbytes_per_sec": 0, 00:09:39.376 "w_mbytes_per_sec": 0 00:09:39.376 }, 00:09:39.376 "claimed": true, 00:09:39.376 "claim_type": "exclusive_write", 00:09:39.376 "zoned": false, 00:09:39.376 "supported_io_types": { 
00:09:39.376 "read": true, 00:09:39.376 "write": true, 00:09:39.376 "unmap": true, 00:09:39.376 "flush": true, 00:09:39.376 "reset": true, 00:09:39.376 "nvme_admin": false, 00:09:39.376 "nvme_io": false, 00:09:39.376 "nvme_io_md": false, 00:09:39.376 "write_zeroes": true, 00:09:39.376 "zcopy": true, 00:09:39.376 "get_zone_info": false, 00:09:39.376 "zone_management": false, 00:09:39.376 "zone_append": false, 00:09:39.376 "compare": false, 00:09:39.376 "compare_and_write": false, 00:09:39.376 "abort": true, 00:09:39.376 "seek_hole": false, 00:09:39.376 "seek_data": false, 00:09:39.376 "copy": true, 00:09:39.376 "nvme_iov_md": false 00:09:39.376 }, 00:09:39.376 "memory_domains": [ 00:09:39.376 { 00:09:39.376 "dma_device_id": "system", 00:09:39.376 "dma_device_type": 1 00:09:39.376 }, 00:09:39.376 { 00:09:39.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.376 "dma_device_type": 2 00:09:39.376 } 00:09:39.376 ], 00:09:39.376 "driver_specific": {} 00:09:39.376 } 00:09:39.376 ] 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.376 "name": "Existed_Raid", 00:09:39.376 "uuid": "7e6ba72b-54d5-4e82-917d-209e619bacdf", 00:09:39.376 "strip_size_kb": 64, 00:09:39.376 "state": "configuring", 00:09:39.376 "raid_level": "concat", 00:09:39.376 "superblock": true, 00:09:39.376 "num_base_bdevs": 4, 00:09:39.376 "num_base_bdevs_discovered": 3, 00:09:39.376 "num_base_bdevs_operational": 4, 00:09:39.376 "base_bdevs_list": [ 00:09:39.376 { 00:09:39.376 "name": "BaseBdev1", 00:09:39.376 "uuid": "86a5fa7a-6c71-4e24-a176-a9adfa49b9d3", 00:09:39.376 "is_configured": true, 00:09:39.376 "data_offset": 2048, 00:09:39.376 "data_size": 63488 00:09:39.376 }, 00:09:39.376 { 00:09:39.376 "name": "BaseBdev2", 00:09:39.376 
"uuid": "98b51c29-f42e-40d3-8f0e-2e4c1fe25e49", 00:09:39.376 "is_configured": true, 00:09:39.376 "data_offset": 2048, 00:09:39.376 "data_size": 63488 00:09:39.376 }, 00:09:39.376 { 00:09:39.376 "name": "BaseBdev3", 00:09:39.376 "uuid": "85ac7058-cf71-4451-afbd-3e37d0c00bc0", 00:09:39.376 "is_configured": true, 00:09:39.376 "data_offset": 2048, 00:09:39.376 "data_size": 63488 00:09:39.376 }, 00:09:39.376 { 00:09:39.376 "name": "BaseBdev4", 00:09:39.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.376 "is_configured": false, 00:09:39.376 "data_offset": 0, 00:09:39.376 "data_size": 0 00:09:39.376 } 00:09:39.376 ] 00:09:39.376 }' 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.376 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.639 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:39.639 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.639 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.639 [2024-11-02 23:49:33.689807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:39.639 [2024-11-02 23:49:33.690022] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:39.639 [2024-11-02 23:49:33.690036] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:39.639 BaseBdev4 00:09:39.639 [2024-11-02 23:49:33.690349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:39.639 [2024-11-02 23:49:33.690499] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:39.639 [2024-11-02 23:49:33.690519] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:09:39.639 [2024-11-02 23:49:33.690631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.639 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.639 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:39.639 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:09:39.639 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:39.639 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:39.639 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:39.639 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:39.639 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:39.639 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.639 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.639 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.639 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:39.639 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.639 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.639 [ 00:09:39.639 { 00:09:39.639 "name": "BaseBdev4", 00:09:39.639 "aliases": [ 00:09:39.639 "e4c81462-9d51-4f61-b015-934b56c51dcc" 00:09:39.639 ], 00:09:39.639 "product_name": "Malloc disk", 00:09:39.639 "block_size": 512, 00:09:39.639 
"num_blocks": 65536, 00:09:39.639 "uuid": "e4c81462-9d51-4f61-b015-934b56c51dcc", 00:09:39.639 "assigned_rate_limits": { 00:09:39.639 "rw_ios_per_sec": 0, 00:09:39.639 "rw_mbytes_per_sec": 0, 00:09:39.639 "r_mbytes_per_sec": 0, 00:09:39.639 "w_mbytes_per_sec": 0 00:09:39.639 }, 00:09:39.639 "claimed": true, 00:09:39.639 "claim_type": "exclusive_write", 00:09:39.639 "zoned": false, 00:09:39.639 "supported_io_types": { 00:09:39.639 "read": true, 00:09:39.639 "write": true, 00:09:39.639 "unmap": true, 00:09:39.639 "flush": true, 00:09:39.640 "reset": true, 00:09:39.640 "nvme_admin": false, 00:09:39.640 "nvme_io": false, 00:09:39.640 "nvme_io_md": false, 00:09:39.640 "write_zeroes": true, 00:09:39.640 "zcopy": true, 00:09:39.640 "get_zone_info": false, 00:09:39.640 "zone_management": false, 00:09:39.640 "zone_append": false, 00:09:39.640 "compare": false, 00:09:39.640 "compare_and_write": false, 00:09:39.640 "abort": true, 00:09:39.640 "seek_hole": false, 00:09:39.640 "seek_data": false, 00:09:39.640 "copy": true, 00:09:39.640 "nvme_iov_md": false 00:09:39.640 }, 00:09:39.640 "memory_domains": [ 00:09:39.640 { 00:09:39.640 "dma_device_id": "system", 00:09:39.640 "dma_device_type": 1 00:09:39.640 }, 00:09:39.640 { 00:09:39.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.640 "dma_device_type": 2 00:09:39.640 } 00:09:39.640 ], 00:09:39.640 "driver_specific": {} 00:09:39.640 } 00:09:39.640 ] 00:09:39.640 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.640 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:39.640 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:39.640 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:39.640 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:09:39.640 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.640 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.640 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.640 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.640 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.640 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.640 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.640 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.640 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.899 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.899 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.899 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.899 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.899 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.899 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.899 "name": "Existed_Raid", 00:09:39.899 "uuid": "7e6ba72b-54d5-4e82-917d-209e619bacdf", 00:09:39.899 "strip_size_kb": 64, 00:09:39.899 "state": "online", 00:09:39.899 "raid_level": "concat", 00:09:39.900 "superblock": true, 00:09:39.900 "num_base_bdevs": 4, 
00:09:39.900 "num_base_bdevs_discovered": 4, 00:09:39.900 "num_base_bdevs_operational": 4, 00:09:39.900 "base_bdevs_list": [ 00:09:39.900 { 00:09:39.900 "name": "BaseBdev1", 00:09:39.900 "uuid": "86a5fa7a-6c71-4e24-a176-a9adfa49b9d3", 00:09:39.900 "is_configured": true, 00:09:39.900 "data_offset": 2048, 00:09:39.900 "data_size": 63488 00:09:39.900 }, 00:09:39.900 { 00:09:39.900 "name": "BaseBdev2", 00:09:39.900 "uuid": "98b51c29-f42e-40d3-8f0e-2e4c1fe25e49", 00:09:39.900 "is_configured": true, 00:09:39.900 "data_offset": 2048, 00:09:39.900 "data_size": 63488 00:09:39.900 }, 00:09:39.900 { 00:09:39.900 "name": "BaseBdev3", 00:09:39.900 "uuid": "85ac7058-cf71-4451-afbd-3e37d0c00bc0", 00:09:39.900 "is_configured": true, 00:09:39.900 "data_offset": 2048, 00:09:39.900 "data_size": 63488 00:09:39.900 }, 00:09:39.900 { 00:09:39.900 "name": "BaseBdev4", 00:09:39.900 "uuid": "e4c81462-9d51-4f61-b015-934b56c51dcc", 00:09:39.900 "is_configured": true, 00:09:39.900 "data_offset": 2048, 00:09:39.900 "data_size": 63488 00:09:39.900 } 00:09:39.900 ] 00:09:39.900 }' 00:09:39.900 23:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.900 23:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.159 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:40.159 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:40.159 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:40.159 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:40.159 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.159 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.159 
23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:40.159 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.159 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.159 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:40.159 [2024-11-02 23:49:34.193321] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.159 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.159 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.159 "name": "Existed_Raid", 00:09:40.159 "aliases": [ 00:09:40.159 "7e6ba72b-54d5-4e82-917d-209e619bacdf" 00:09:40.159 ], 00:09:40.159 "product_name": "Raid Volume", 00:09:40.159 "block_size": 512, 00:09:40.159 "num_blocks": 253952, 00:09:40.159 "uuid": "7e6ba72b-54d5-4e82-917d-209e619bacdf", 00:09:40.159 "assigned_rate_limits": { 00:09:40.159 "rw_ios_per_sec": 0, 00:09:40.159 "rw_mbytes_per_sec": 0, 00:09:40.159 "r_mbytes_per_sec": 0, 00:09:40.159 "w_mbytes_per_sec": 0 00:09:40.159 }, 00:09:40.159 "claimed": false, 00:09:40.159 "zoned": false, 00:09:40.159 "supported_io_types": { 00:09:40.159 "read": true, 00:09:40.159 "write": true, 00:09:40.159 "unmap": true, 00:09:40.159 "flush": true, 00:09:40.159 "reset": true, 00:09:40.159 "nvme_admin": false, 00:09:40.159 "nvme_io": false, 00:09:40.159 "nvme_io_md": false, 00:09:40.159 "write_zeroes": true, 00:09:40.159 "zcopy": false, 00:09:40.159 "get_zone_info": false, 00:09:40.159 "zone_management": false, 00:09:40.159 "zone_append": false, 00:09:40.159 "compare": false, 00:09:40.159 "compare_and_write": false, 00:09:40.159 "abort": false, 00:09:40.159 "seek_hole": false, 00:09:40.159 "seek_data": false, 00:09:40.159 "copy": false, 00:09:40.159 
"nvme_iov_md": false 00:09:40.159 }, 00:09:40.159 "memory_domains": [ 00:09:40.159 { 00:09:40.159 "dma_device_id": "system", 00:09:40.159 "dma_device_type": 1 00:09:40.159 }, 00:09:40.159 { 00:09:40.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.159 "dma_device_type": 2 00:09:40.159 }, 00:09:40.159 { 00:09:40.160 "dma_device_id": "system", 00:09:40.160 "dma_device_type": 1 00:09:40.160 }, 00:09:40.160 { 00:09:40.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.160 "dma_device_type": 2 00:09:40.160 }, 00:09:40.160 { 00:09:40.160 "dma_device_id": "system", 00:09:40.160 "dma_device_type": 1 00:09:40.160 }, 00:09:40.160 { 00:09:40.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.160 "dma_device_type": 2 00:09:40.160 }, 00:09:40.160 { 00:09:40.160 "dma_device_id": "system", 00:09:40.160 "dma_device_type": 1 00:09:40.160 }, 00:09:40.160 { 00:09:40.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.160 "dma_device_type": 2 00:09:40.160 } 00:09:40.160 ], 00:09:40.160 "driver_specific": { 00:09:40.160 "raid": { 00:09:40.160 "uuid": "7e6ba72b-54d5-4e82-917d-209e619bacdf", 00:09:40.160 "strip_size_kb": 64, 00:09:40.160 "state": "online", 00:09:40.160 "raid_level": "concat", 00:09:40.160 "superblock": true, 00:09:40.160 "num_base_bdevs": 4, 00:09:40.160 "num_base_bdevs_discovered": 4, 00:09:40.160 "num_base_bdevs_operational": 4, 00:09:40.160 "base_bdevs_list": [ 00:09:40.160 { 00:09:40.160 "name": "BaseBdev1", 00:09:40.160 "uuid": "86a5fa7a-6c71-4e24-a176-a9adfa49b9d3", 00:09:40.160 "is_configured": true, 00:09:40.160 "data_offset": 2048, 00:09:40.160 "data_size": 63488 00:09:40.160 }, 00:09:40.160 { 00:09:40.160 "name": "BaseBdev2", 00:09:40.160 "uuid": "98b51c29-f42e-40d3-8f0e-2e4c1fe25e49", 00:09:40.160 "is_configured": true, 00:09:40.160 "data_offset": 2048, 00:09:40.160 "data_size": 63488 00:09:40.160 }, 00:09:40.160 { 00:09:40.160 "name": "BaseBdev3", 00:09:40.160 "uuid": "85ac7058-cf71-4451-afbd-3e37d0c00bc0", 00:09:40.160 "is_configured": true, 
00:09:40.160 "data_offset": 2048, 00:09:40.160 "data_size": 63488 00:09:40.160 }, 00:09:40.160 { 00:09:40.160 "name": "BaseBdev4", 00:09:40.160 "uuid": "e4c81462-9d51-4f61-b015-934b56c51dcc", 00:09:40.160 "is_configured": true, 00:09:40.160 "data_offset": 2048, 00:09:40.160 "data_size": 63488 00:09:40.160 } 00:09:40.160 ] 00:09:40.160 } 00:09:40.160 } 00:09:40.160 }' 00:09:40.160 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:40.420 BaseBdev2 00:09:40.420 BaseBdev3 00:09:40.420 BaseBdev4' 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.420 23:49:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.420 [2024-11-02 23:49:34.480546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:40.420 [2024-11-02 23:49:34.480577] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.420 [2024-11-02 23:49:34.480640] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:40.420 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:40.421 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.421 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:40.421 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.421 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.421 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.421 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.421 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.421 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.421 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.421 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.421 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.421 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.421 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.680 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:40.680 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.680 "name": "Existed_Raid", 00:09:40.680 "uuid": "7e6ba72b-54d5-4e82-917d-209e619bacdf", 00:09:40.680 "strip_size_kb": 64, 00:09:40.680 "state": "offline", 00:09:40.680 "raid_level": "concat", 00:09:40.680 "superblock": true, 00:09:40.680 "num_base_bdevs": 4, 00:09:40.680 "num_base_bdevs_discovered": 3, 00:09:40.680 "num_base_bdevs_operational": 3, 00:09:40.680 "base_bdevs_list": [ 00:09:40.680 { 00:09:40.680 "name": null, 00:09:40.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.680 "is_configured": false, 00:09:40.680 "data_offset": 0, 00:09:40.680 "data_size": 63488 00:09:40.680 }, 00:09:40.680 { 00:09:40.680 "name": "BaseBdev2", 00:09:40.680 "uuid": "98b51c29-f42e-40d3-8f0e-2e4c1fe25e49", 00:09:40.680 "is_configured": true, 00:09:40.680 "data_offset": 2048, 00:09:40.680 "data_size": 63488 00:09:40.680 }, 00:09:40.680 { 00:09:40.680 "name": "BaseBdev3", 00:09:40.680 "uuid": "85ac7058-cf71-4451-afbd-3e37d0c00bc0", 00:09:40.680 "is_configured": true, 00:09:40.680 "data_offset": 2048, 00:09:40.680 "data_size": 63488 00:09:40.680 }, 00:09:40.680 { 00:09:40.680 "name": "BaseBdev4", 00:09:40.680 "uuid": "e4c81462-9d51-4f61-b015-934b56c51dcc", 00:09:40.680 "is_configured": true, 00:09:40.680 "data_offset": 2048, 00:09:40.680 "data_size": 63488 00:09:40.680 } 00:09:40.680 ] 00:09:40.680 }' 00:09:40.680 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.680 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.939 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:40.939 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:40.939 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.939 
23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.939 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.939 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:40.939 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.939 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:40.939 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:40.939 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:40.939 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.939 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.939 [2024-11-02 23:49:34.987232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:40.939 23:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.939 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:40.939 23:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:40.939 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.939 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.939 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:40.939 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.939 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.199 [2024-11-02 23:49:35.054102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:41.199 23:49:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.199 [2024-11-02 23:49:35.121111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:41.199 [2024-11-02 23:49:35.121205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.199 BaseBdev2 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.199 [ 00:09:41.199 { 00:09:41.199 "name": "BaseBdev2", 00:09:41.199 "aliases": [ 00:09:41.199 
"37343625-75b0-4be7-b2dc-0aceb7e83e05" 00:09:41.199 ], 00:09:41.199 "product_name": "Malloc disk", 00:09:41.199 "block_size": 512, 00:09:41.199 "num_blocks": 65536, 00:09:41.199 "uuid": "37343625-75b0-4be7-b2dc-0aceb7e83e05", 00:09:41.199 "assigned_rate_limits": { 00:09:41.199 "rw_ios_per_sec": 0, 00:09:41.199 "rw_mbytes_per_sec": 0, 00:09:41.199 "r_mbytes_per_sec": 0, 00:09:41.199 "w_mbytes_per_sec": 0 00:09:41.199 }, 00:09:41.199 "claimed": false, 00:09:41.199 "zoned": false, 00:09:41.199 "supported_io_types": { 00:09:41.199 "read": true, 00:09:41.199 "write": true, 00:09:41.199 "unmap": true, 00:09:41.199 "flush": true, 00:09:41.199 "reset": true, 00:09:41.199 "nvme_admin": false, 00:09:41.199 "nvme_io": false, 00:09:41.199 "nvme_io_md": false, 00:09:41.199 "write_zeroes": true, 00:09:41.199 "zcopy": true, 00:09:41.199 "get_zone_info": false, 00:09:41.199 "zone_management": false, 00:09:41.199 "zone_append": false, 00:09:41.199 "compare": false, 00:09:41.199 "compare_and_write": false, 00:09:41.199 "abort": true, 00:09:41.199 "seek_hole": false, 00:09:41.199 "seek_data": false, 00:09:41.199 "copy": true, 00:09:41.199 "nvme_iov_md": false 00:09:41.199 }, 00:09:41.199 "memory_domains": [ 00:09:41.199 { 00:09:41.199 "dma_device_id": "system", 00:09:41.199 "dma_device_type": 1 00:09:41.199 }, 00:09:41.199 { 00:09:41.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.199 "dma_device_type": 2 00:09:41.199 } 00:09:41.199 ], 00:09:41.199 "driver_specific": {} 00:09:41.199 } 00:09:41.199 ] 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.199 23:49:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.199 BaseBdev3 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.199 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.199 [ 00:09:41.199 { 
00:09:41.199 "name": "BaseBdev3", 00:09:41.199 "aliases": [ 00:09:41.199 "d352221d-7121-4783-972c-ed4404dc744b" 00:09:41.199 ], 00:09:41.199 "product_name": "Malloc disk", 00:09:41.199 "block_size": 512, 00:09:41.199 "num_blocks": 65536, 00:09:41.199 "uuid": "d352221d-7121-4783-972c-ed4404dc744b", 00:09:41.199 "assigned_rate_limits": { 00:09:41.199 "rw_ios_per_sec": 0, 00:09:41.199 "rw_mbytes_per_sec": 0, 00:09:41.199 "r_mbytes_per_sec": 0, 00:09:41.199 "w_mbytes_per_sec": 0 00:09:41.199 }, 00:09:41.199 "claimed": false, 00:09:41.199 "zoned": false, 00:09:41.199 "supported_io_types": { 00:09:41.199 "read": true, 00:09:41.199 "write": true, 00:09:41.199 "unmap": true, 00:09:41.199 "flush": true, 00:09:41.199 "reset": true, 00:09:41.199 "nvme_admin": false, 00:09:41.199 "nvme_io": false, 00:09:41.199 "nvme_io_md": false, 00:09:41.199 "write_zeroes": true, 00:09:41.199 "zcopy": true, 00:09:41.200 "get_zone_info": false, 00:09:41.200 "zone_management": false, 00:09:41.200 "zone_append": false, 00:09:41.200 "compare": false, 00:09:41.200 "compare_and_write": false, 00:09:41.200 "abort": true, 00:09:41.200 "seek_hole": false, 00:09:41.200 "seek_data": false, 00:09:41.200 "copy": true, 00:09:41.200 "nvme_iov_md": false 00:09:41.200 }, 00:09:41.200 "memory_domains": [ 00:09:41.200 { 00:09:41.200 "dma_device_id": "system", 00:09:41.200 "dma_device_type": 1 00:09:41.200 }, 00:09:41.200 { 00:09:41.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.200 "dma_device_type": 2 00:09:41.200 } 00:09:41.200 ], 00:09:41.200 "driver_specific": {} 00:09:41.200 } 00:09:41.200 ] 00:09:41.200 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.200 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:41.200 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:41.200 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:41.459 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:41.459 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.460 BaseBdev4 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:41.460 [ 00:09:41.460 { 00:09:41.460 "name": "BaseBdev4", 00:09:41.460 "aliases": [ 00:09:41.460 "3847bc5a-48a1-4fe4-b1f8-06d65949bc0f" 00:09:41.460 ], 00:09:41.460 "product_name": "Malloc disk", 00:09:41.460 "block_size": 512, 00:09:41.460 "num_blocks": 65536, 00:09:41.460 "uuid": "3847bc5a-48a1-4fe4-b1f8-06d65949bc0f", 00:09:41.460 "assigned_rate_limits": { 00:09:41.460 "rw_ios_per_sec": 0, 00:09:41.460 "rw_mbytes_per_sec": 0, 00:09:41.460 "r_mbytes_per_sec": 0, 00:09:41.460 "w_mbytes_per_sec": 0 00:09:41.460 }, 00:09:41.460 "claimed": false, 00:09:41.460 "zoned": false, 00:09:41.460 "supported_io_types": { 00:09:41.460 "read": true, 00:09:41.460 "write": true, 00:09:41.460 "unmap": true, 00:09:41.460 "flush": true, 00:09:41.460 "reset": true, 00:09:41.460 "nvme_admin": false, 00:09:41.460 "nvme_io": false, 00:09:41.460 "nvme_io_md": false, 00:09:41.460 "write_zeroes": true, 00:09:41.460 "zcopy": true, 00:09:41.460 "get_zone_info": false, 00:09:41.460 "zone_management": false, 00:09:41.460 "zone_append": false, 00:09:41.460 "compare": false, 00:09:41.460 "compare_and_write": false, 00:09:41.460 "abort": true, 00:09:41.460 "seek_hole": false, 00:09:41.460 "seek_data": false, 00:09:41.460 "copy": true, 00:09:41.460 "nvme_iov_md": false 00:09:41.460 }, 00:09:41.460 "memory_domains": [ 00:09:41.460 { 00:09:41.460 "dma_device_id": "system", 00:09:41.460 "dma_device_type": 1 00:09:41.460 }, 00:09:41.460 { 00:09:41.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.460 "dma_device_type": 2 00:09:41.460 } 00:09:41.460 ], 00:09:41.460 "driver_specific": {} 00:09:41.460 } 00:09:41.460 ] 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:41.460 23:49:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.460 [2024-11-02 23:49:35.349892] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:41.460 [2024-11-02 23:49:35.350026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:41.460 [2024-11-02 23:49:35.350090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.460 [2024-11-02 23:49:35.351946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:41.460 [2024-11-02 23:49:35.352032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.460 "name": "Existed_Raid", 00:09:41.460 "uuid": "ea6c26dc-8c4d-46c1-b4ac-e9d4958cc68c", 00:09:41.460 "strip_size_kb": 64, 00:09:41.460 "state": "configuring", 00:09:41.460 "raid_level": "concat", 00:09:41.460 "superblock": true, 00:09:41.460 "num_base_bdevs": 4, 00:09:41.460 "num_base_bdevs_discovered": 3, 00:09:41.460 "num_base_bdevs_operational": 4, 00:09:41.460 "base_bdevs_list": [ 00:09:41.460 { 00:09:41.460 "name": "BaseBdev1", 00:09:41.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.460 "is_configured": false, 00:09:41.460 "data_offset": 0, 00:09:41.460 "data_size": 0 00:09:41.460 }, 00:09:41.460 { 00:09:41.460 "name": "BaseBdev2", 00:09:41.460 "uuid": "37343625-75b0-4be7-b2dc-0aceb7e83e05", 00:09:41.460 "is_configured": true, 00:09:41.460 "data_offset": 2048, 00:09:41.460 "data_size": 63488 
00:09:41.460 }, 00:09:41.460 { 00:09:41.460 "name": "BaseBdev3", 00:09:41.460 "uuid": "d352221d-7121-4783-972c-ed4404dc744b", 00:09:41.460 "is_configured": true, 00:09:41.460 "data_offset": 2048, 00:09:41.460 "data_size": 63488 00:09:41.460 }, 00:09:41.460 { 00:09:41.460 "name": "BaseBdev4", 00:09:41.460 "uuid": "3847bc5a-48a1-4fe4-b1f8-06d65949bc0f", 00:09:41.460 "is_configured": true, 00:09:41.460 "data_offset": 2048, 00:09:41.460 "data_size": 63488 00:09:41.460 } 00:09:41.460 ] 00:09:41.460 }' 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.460 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.720 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:41.720 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.720 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.720 [2024-11-02 23:49:35.769200] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:41.720 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.720 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:41.720 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.720 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.720 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.720 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.720 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:41.720 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.720 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.720 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.720 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.720 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.720 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.720 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.720 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.720 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.979 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.979 "name": "Existed_Raid", 00:09:41.979 "uuid": "ea6c26dc-8c4d-46c1-b4ac-e9d4958cc68c", 00:09:41.979 "strip_size_kb": 64, 00:09:41.979 "state": "configuring", 00:09:41.979 "raid_level": "concat", 00:09:41.979 "superblock": true, 00:09:41.979 "num_base_bdevs": 4, 00:09:41.979 "num_base_bdevs_discovered": 2, 00:09:41.979 "num_base_bdevs_operational": 4, 00:09:41.979 "base_bdevs_list": [ 00:09:41.979 { 00:09:41.979 "name": "BaseBdev1", 00:09:41.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.979 "is_configured": false, 00:09:41.979 "data_offset": 0, 00:09:41.979 "data_size": 0 00:09:41.979 }, 00:09:41.979 { 00:09:41.979 "name": null, 00:09:41.979 "uuid": "37343625-75b0-4be7-b2dc-0aceb7e83e05", 00:09:41.979 "is_configured": false, 00:09:41.979 "data_offset": 0, 00:09:41.979 "data_size": 63488 
00:09:41.979 }, 00:09:41.979 { 00:09:41.979 "name": "BaseBdev3", 00:09:41.979 "uuid": "d352221d-7121-4783-972c-ed4404dc744b", 00:09:41.979 "is_configured": true, 00:09:41.979 "data_offset": 2048, 00:09:41.979 "data_size": 63488 00:09:41.979 }, 00:09:41.979 { 00:09:41.979 "name": "BaseBdev4", 00:09:41.979 "uuid": "3847bc5a-48a1-4fe4-b1f8-06d65949bc0f", 00:09:41.979 "is_configured": true, 00:09:41.979 "data_offset": 2048, 00:09:41.979 "data_size": 63488 00:09:41.979 } 00:09:41.979 ] 00:09:41.979 }' 00:09:41.979 23:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.979 23:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.239 [2024-11-02 23:49:36.319127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.239 BaseBdev1 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.239 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.240 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.500 [ 00:09:42.500 { 00:09:42.500 "name": "BaseBdev1", 00:09:42.500 "aliases": [ 00:09:42.500 "20c6a118-2b74-4783-a757-7d71d66bba49" 00:09:42.500 ], 00:09:42.500 "product_name": "Malloc disk", 00:09:42.500 "block_size": 512, 00:09:42.500 "num_blocks": 65536, 00:09:42.500 "uuid": "20c6a118-2b74-4783-a757-7d71d66bba49", 00:09:42.500 "assigned_rate_limits": { 00:09:42.500 "rw_ios_per_sec": 0, 00:09:42.500 "rw_mbytes_per_sec": 0, 
00:09:42.500 "r_mbytes_per_sec": 0, 00:09:42.500 "w_mbytes_per_sec": 0 00:09:42.500 }, 00:09:42.500 "claimed": true, 00:09:42.500 "claim_type": "exclusive_write", 00:09:42.500 "zoned": false, 00:09:42.500 "supported_io_types": { 00:09:42.500 "read": true, 00:09:42.500 "write": true, 00:09:42.500 "unmap": true, 00:09:42.500 "flush": true, 00:09:42.500 "reset": true, 00:09:42.500 "nvme_admin": false, 00:09:42.500 "nvme_io": false, 00:09:42.500 "nvme_io_md": false, 00:09:42.500 "write_zeroes": true, 00:09:42.500 "zcopy": true, 00:09:42.500 "get_zone_info": false, 00:09:42.500 "zone_management": false, 00:09:42.500 "zone_append": false, 00:09:42.500 "compare": false, 00:09:42.500 "compare_and_write": false, 00:09:42.500 "abort": true, 00:09:42.500 "seek_hole": false, 00:09:42.500 "seek_data": false, 00:09:42.500 "copy": true, 00:09:42.500 "nvme_iov_md": false 00:09:42.500 }, 00:09:42.500 "memory_domains": [ 00:09:42.500 { 00:09:42.500 "dma_device_id": "system", 00:09:42.500 "dma_device_type": 1 00:09:42.500 }, 00:09:42.500 { 00:09:42.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.500 "dma_device_type": 2 00:09:42.500 } 00:09:42.500 ], 00:09:42.500 "driver_specific": {} 00:09:42.500 } 00:09:42.500 ] 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.500 23:49:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.500 "name": "Existed_Raid", 00:09:42.500 "uuid": "ea6c26dc-8c4d-46c1-b4ac-e9d4958cc68c", 00:09:42.500 "strip_size_kb": 64, 00:09:42.500 "state": "configuring", 00:09:42.500 "raid_level": "concat", 00:09:42.500 "superblock": true, 00:09:42.500 "num_base_bdevs": 4, 00:09:42.500 "num_base_bdevs_discovered": 3, 00:09:42.500 "num_base_bdevs_operational": 4, 00:09:42.500 "base_bdevs_list": [ 00:09:42.500 { 00:09:42.500 "name": "BaseBdev1", 00:09:42.500 "uuid": "20c6a118-2b74-4783-a757-7d71d66bba49", 00:09:42.500 "is_configured": true, 00:09:42.500 "data_offset": 2048, 00:09:42.500 "data_size": 63488 00:09:42.500 }, 00:09:42.500 { 
00:09:42.500 "name": null, 00:09:42.500 "uuid": "37343625-75b0-4be7-b2dc-0aceb7e83e05", 00:09:42.500 "is_configured": false, 00:09:42.500 "data_offset": 0, 00:09:42.500 "data_size": 63488 00:09:42.500 }, 00:09:42.500 { 00:09:42.500 "name": "BaseBdev3", 00:09:42.500 "uuid": "d352221d-7121-4783-972c-ed4404dc744b", 00:09:42.500 "is_configured": true, 00:09:42.500 "data_offset": 2048, 00:09:42.500 "data_size": 63488 00:09:42.500 }, 00:09:42.500 { 00:09:42.500 "name": "BaseBdev4", 00:09:42.500 "uuid": "3847bc5a-48a1-4fe4-b1f8-06d65949bc0f", 00:09:42.500 "is_configured": true, 00:09:42.500 "data_offset": 2048, 00:09:42.500 "data_size": 63488 00:09:42.500 } 00:09:42.500 ] 00:09:42.500 }' 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.500 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.760 [2024-11-02 23:49:36.830391] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.760 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.019 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.019 23:49:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.019 "name": "Existed_Raid", 00:09:43.019 "uuid": "ea6c26dc-8c4d-46c1-b4ac-e9d4958cc68c", 00:09:43.019 "strip_size_kb": 64, 00:09:43.019 "state": "configuring", 00:09:43.019 "raid_level": "concat", 00:09:43.019 "superblock": true, 00:09:43.019 "num_base_bdevs": 4, 00:09:43.019 "num_base_bdevs_discovered": 2, 00:09:43.019 "num_base_bdevs_operational": 4, 00:09:43.019 "base_bdevs_list": [ 00:09:43.019 { 00:09:43.019 "name": "BaseBdev1", 00:09:43.019 "uuid": "20c6a118-2b74-4783-a757-7d71d66bba49", 00:09:43.019 "is_configured": true, 00:09:43.019 "data_offset": 2048, 00:09:43.019 "data_size": 63488 00:09:43.019 }, 00:09:43.019 { 00:09:43.019 "name": null, 00:09:43.019 "uuid": "37343625-75b0-4be7-b2dc-0aceb7e83e05", 00:09:43.019 "is_configured": false, 00:09:43.019 "data_offset": 0, 00:09:43.019 "data_size": 63488 00:09:43.019 }, 00:09:43.019 { 00:09:43.019 "name": null, 00:09:43.019 "uuid": "d352221d-7121-4783-972c-ed4404dc744b", 00:09:43.019 "is_configured": false, 00:09:43.019 "data_offset": 0, 00:09:43.019 "data_size": 63488 00:09:43.019 }, 00:09:43.019 { 00:09:43.019 "name": "BaseBdev4", 00:09:43.019 "uuid": "3847bc5a-48a1-4fe4-b1f8-06d65949bc0f", 00:09:43.019 "is_configured": true, 00:09:43.019 "data_offset": 2048, 00:09:43.019 "data_size": 63488 00:09:43.019 } 00:09:43.019 ] 00:09:43.019 }' 00:09:43.019 23:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.019 23:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.278 
23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.278 [2024-11-02 23:49:37.345544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.278 23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.547 23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.547 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.547 "name": "Existed_Raid", 00:09:43.547 "uuid": "ea6c26dc-8c4d-46c1-b4ac-e9d4958cc68c", 00:09:43.547 "strip_size_kb": 64, 00:09:43.547 "state": "configuring", 00:09:43.547 "raid_level": "concat", 00:09:43.547 "superblock": true, 00:09:43.547 "num_base_bdevs": 4, 00:09:43.547 "num_base_bdevs_discovered": 3, 00:09:43.547 "num_base_bdevs_operational": 4, 00:09:43.547 "base_bdevs_list": [ 00:09:43.547 { 00:09:43.547 "name": "BaseBdev1", 00:09:43.547 "uuid": "20c6a118-2b74-4783-a757-7d71d66bba49", 00:09:43.547 "is_configured": true, 00:09:43.547 "data_offset": 2048, 00:09:43.547 "data_size": 63488 00:09:43.547 }, 00:09:43.547 { 00:09:43.547 "name": null, 00:09:43.547 "uuid": "37343625-75b0-4be7-b2dc-0aceb7e83e05", 00:09:43.547 "is_configured": false, 00:09:43.547 "data_offset": 0, 00:09:43.547 "data_size": 63488 00:09:43.547 }, 00:09:43.547 { 00:09:43.547 "name": "BaseBdev3", 00:09:43.547 "uuid": "d352221d-7121-4783-972c-ed4404dc744b", 00:09:43.547 "is_configured": true, 00:09:43.547 "data_offset": 2048, 00:09:43.547 "data_size": 63488 00:09:43.547 }, 00:09:43.547 { 00:09:43.547 "name": "BaseBdev4", 00:09:43.547 "uuid": 
"3847bc5a-48a1-4fe4-b1f8-06d65949bc0f", 00:09:43.547 "is_configured": true, 00:09:43.547 "data_offset": 2048, 00:09:43.547 "data_size": 63488 00:09:43.547 } 00:09:43.547 ] 00:09:43.547 }' 00:09:43.547 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.547 23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.811 [2024-11-02 23:49:37.840948] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.811 23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.070 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.070 "name": "Existed_Raid", 00:09:44.070 "uuid": "ea6c26dc-8c4d-46c1-b4ac-e9d4958cc68c", 00:09:44.070 "strip_size_kb": 64, 00:09:44.070 "state": "configuring", 00:09:44.070 "raid_level": "concat", 00:09:44.070 "superblock": true, 00:09:44.070 "num_base_bdevs": 4, 00:09:44.070 "num_base_bdevs_discovered": 2, 00:09:44.070 "num_base_bdevs_operational": 4, 00:09:44.070 "base_bdevs_list": [ 00:09:44.070 { 00:09:44.070 "name": null, 00:09:44.070 
"uuid": "20c6a118-2b74-4783-a757-7d71d66bba49", 00:09:44.070 "is_configured": false, 00:09:44.070 "data_offset": 0, 00:09:44.070 "data_size": 63488 00:09:44.070 }, 00:09:44.070 { 00:09:44.070 "name": null, 00:09:44.070 "uuid": "37343625-75b0-4be7-b2dc-0aceb7e83e05", 00:09:44.070 "is_configured": false, 00:09:44.070 "data_offset": 0, 00:09:44.070 "data_size": 63488 00:09:44.070 }, 00:09:44.070 { 00:09:44.070 "name": "BaseBdev3", 00:09:44.070 "uuid": "d352221d-7121-4783-972c-ed4404dc744b", 00:09:44.070 "is_configured": true, 00:09:44.070 "data_offset": 2048, 00:09:44.070 "data_size": 63488 00:09:44.070 }, 00:09:44.070 { 00:09:44.070 "name": "BaseBdev4", 00:09:44.070 "uuid": "3847bc5a-48a1-4fe4-b1f8-06d65949bc0f", 00:09:44.070 "is_configured": true, 00:09:44.070 "data_offset": 2048, 00:09:44.070 "data_size": 63488 00:09:44.070 } 00:09:44.070 ] 00:09:44.070 }' 00:09:44.070 23:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.070 23:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.339 [2024-11-02 23:49:38.363007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.339 23:49:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.339 "name": "Existed_Raid", 00:09:44.339 "uuid": "ea6c26dc-8c4d-46c1-b4ac-e9d4958cc68c", 00:09:44.339 "strip_size_kb": 64, 00:09:44.339 "state": "configuring", 00:09:44.339 "raid_level": "concat", 00:09:44.339 "superblock": true, 00:09:44.339 "num_base_bdevs": 4, 00:09:44.339 "num_base_bdevs_discovered": 3, 00:09:44.339 "num_base_bdevs_operational": 4, 00:09:44.339 "base_bdevs_list": [ 00:09:44.339 { 00:09:44.339 "name": null, 00:09:44.339 "uuid": "20c6a118-2b74-4783-a757-7d71d66bba49", 00:09:44.339 "is_configured": false, 00:09:44.339 "data_offset": 0, 00:09:44.339 "data_size": 63488 00:09:44.339 }, 00:09:44.339 { 00:09:44.339 "name": "BaseBdev2", 00:09:44.339 "uuid": "37343625-75b0-4be7-b2dc-0aceb7e83e05", 00:09:44.339 "is_configured": true, 00:09:44.339 "data_offset": 2048, 00:09:44.339 "data_size": 63488 00:09:44.339 }, 00:09:44.339 { 00:09:44.339 "name": "BaseBdev3", 00:09:44.339 "uuid": "d352221d-7121-4783-972c-ed4404dc744b", 00:09:44.339 "is_configured": true, 00:09:44.339 "data_offset": 2048, 00:09:44.339 "data_size": 63488 00:09:44.339 }, 00:09:44.339 { 00:09:44.339 "name": "BaseBdev4", 00:09:44.339 "uuid": "3847bc5a-48a1-4fe4-b1f8-06d65949bc0f", 00:09:44.339 "is_configured": true, 00:09:44.339 "data_offset": 2048, 00:09:44.339 "data_size": 63488 00:09:44.339 } 00:09:44.339 ] 00:09:44.339 }' 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.339 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.925 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.925 23:49:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.925 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.925 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:44.925 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.925 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:44.925 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.925 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:44.925 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.925 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.925 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.925 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 20c6a118-2b74-4783-a757-7d71d66bba49 00:09:44.925 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.925 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.925 [2024-11-02 23:49:38.973335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:44.925 [2024-11-02 23:49:38.973650] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:44.925 [2024-11-02 23:49:38.973700] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:44.925 [2024-11-02 23:49:38.974050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:09:44.925 NewBaseBdev 00:09:44.925 [2024-11-02 23:49:38.974227] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:44.925 [2024-11-02 23:49:38.974246] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:44.925 [2024-11-02 23:49:38.974381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.925 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.925 23:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:44.925 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:44.925 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:44.925 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:44.926 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:44.926 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:44.926 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:44.926 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.926 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.926 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.926 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:44.926 23:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.926 23:49:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.926 [ 00:09:44.926 { 00:09:44.926 "name": "NewBaseBdev", 00:09:44.926 "aliases": [ 00:09:44.926 "20c6a118-2b74-4783-a757-7d71d66bba49" 00:09:44.926 ], 00:09:44.926 "product_name": "Malloc disk", 00:09:44.926 "block_size": 512, 00:09:44.926 "num_blocks": 65536, 00:09:44.926 "uuid": "20c6a118-2b74-4783-a757-7d71d66bba49", 00:09:44.926 "assigned_rate_limits": { 00:09:44.926 "rw_ios_per_sec": 0, 00:09:44.926 "rw_mbytes_per_sec": 0, 00:09:44.926 "r_mbytes_per_sec": 0, 00:09:44.926 "w_mbytes_per_sec": 0 00:09:44.926 }, 00:09:44.926 "claimed": true, 00:09:44.926 "claim_type": "exclusive_write", 00:09:44.926 "zoned": false, 00:09:44.926 "supported_io_types": { 00:09:44.926 "read": true, 00:09:44.926 "write": true, 00:09:44.926 "unmap": true, 00:09:44.926 "flush": true, 00:09:44.926 "reset": true, 00:09:44.926 "nvme_admin": false, 00:09:44.926 "nvme_io": false, 00:09:44.926 "nvme_io_md": false, 00:09:44.926 "write_zeroes": true, 00:09:44.926 "zcopy": true, 00:09:44.926 "get_zone_info": false, 00:09:44.926 "zone_management": false, 00:09:44.926 "zone_append": false, 00:09:44.926 "compare": false, 00:09:44.926 "compare_and_write": false, 00:09:44.926 "abort": true, 00:09:44.926 "seek_hole": false, 00:09:44.926 "seek_data": false, 00:09:44.926 "copy": true, 00:09:44.926 "nvme_iov_md": false 00:09:44.926 }, 00:09:44.926 "memory_domains": [ 00:09:44.926 { 00:09:44.926 "dma_device_id": "system", 00:09:44.926 "dma_device_type": 1 00:09:44.926 }, 00:09:44.926 { 00:09:44.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.926 "dma_device_type": 2 00:09:44.926 } 00:09:44.926 ], 00:09:44.926 "driver_specific": {} 00:09:44.926 } 00:09:44.926 ] 00:09:44.926 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.926 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:44.926 23:49:39 
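The `waitforbdev NewBaseBdev` call traced above expands to the autotest_common.sh helper at @901-@909 in the xtrace. A minimal standalone sketch of that helper, with `rpc_cmd` replaced by an echo stub so it runs outside an SPDK session (the stub is an assumption for illustration; the real `rpc_cmd` talks to the target over `/var/tmp/spdk.sock`, and `bdev_get_bdevs -t` makes the target itself wait up to the timeout for the bdev to appear):

```shell
#!/usr/bin/env bash
# Stub standing in for the real rpc_cmd, which invokes scripts/rpc.py
# against the running SPDK app (assumption: echo instead of a live RPC).
rpc_cmd() { printf 'rpc_cmd %s\n' "$*"; }

# Sketch of waitforbdev as traced at autotest_common.sh@901-@909 above.
waitforbdev() {
        local bdev_name=$1          # @901
        local bdev_timeout=$2       # @902
        local i                     # @903
        [[ -z $bdev_timeout ]] && bdev_timeout=2000   # @904: default 2000 ms
        rpc_cmd bdev_wait_for_examine                 # @906: let examine callbacks finish
        rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"   # @908
        return 0                    # @909
}

waitforbdev NewBaseBdev
```

With the stub in place this prints the two RPCs in order, matching the `bdev_wait_for_examine` / `bdev_get_bdevs -b NewBaseBdev -t 2000` sequence visible in the trace.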
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:44.926 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.926 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.926 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.926 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.926 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.926 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.926 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.926 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.926 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.926 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.194 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.194 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.194 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.194 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.194 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.194 "name": "Existed_Raid", 00:09:45.194 "uuid": "ea6c26dc-8c4d-46c1-b4ac-e9d4958cc68c", 00:09:45.194 "strip_size_kb": 64, 00:09:45.194 
"state": "online", 00:09:45.194 "raid_level": "concat", 00:09:45.194 "superblock": true, 00:09:45.194 "num_base_bdevs": 4, 00:09:45.194 "num_base_bdevs_discovered": 4, 00:09:45.194 "num_base_bdevs_operational": 4, 00:09:45.194 "base_bdevs_list": [ 00:09:45.194 { 00:09:45.194 "name": "NewBaseBdev", 00:09:45.194 "uuid": "20c6a118-2b74-4783-a757-7d71d66bba49", 00:09:45.194 "is_configured": true, 00:09:45.194 "data_offset": 2048, 00:09:45.194 "data_size": 63488 00:09:45.194 }, 00:09:45.194 { 00:09:45.194 "name": "BaseBdev2", 00:09:45.194 "uuid": "37343625-75b0-4be7-b2dc-0aceb7e83e05", 00:09:45.194 "is_configured": true, 00:09:45.194 "data_offset": 2048, 00:09:45.194 "data_size": 63488 00:09:45.194 }, 00:09:45.194 { 00:09:45.194 "name": "BaseBdev3", 00:09:45.194 "uuid": "d352221d-7121-4783-972c-ed4404dc744b", 00:09:45.194 "is_configured": true, 00:09:45.194 "data_offset": 2048, 00:09:45.194 "data_size": 63488 00:09:45.194 }, 00:09:45.194 { 00:09:45.194 "name": "BaseBdev4", 00:09:45.194 "uuid": "3847bc5a-48a1-4fe4-b1f8-06d65949bc0f", 00:09:45.194 "is_configured": true, 00:09:45.194 "data_offset": 2048, 00:09:45.194 "data_size": 63488 00:09:45.194 } 00:09:45.194 ] 00:09:45.194 }' 00:09:45.194 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.194 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.453 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:45.453 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:45.453 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:45.453 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:45.453 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:45.453 
23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:45.453 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:45.453 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.453 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.453 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:45.453 [2024-11-02 23:49:39.472958] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.454 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.454 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:45.454 "name": "Existed_Raid", 00:09:45.454 "aliases": [ 00:09:45.454 "ea6c26dc-8c4d-46c1-b4ac-e9d4958cc68c" 00:09:45.454 ], 00:09:45.454 "product_name": "Raid Volume", 00:09:45.454 "block_size": 512, 00:09:45.454 "num_blocks": 253952, 00:09:45.454 "uuid": "ea6c26dc-8c4d-46c1-b4ac-e9d4958cc68c", 00:09:45.454 "assigned_rate_limits": { 00:09:45.454 "rw_ios_per_sec": 0, 00:09:45.454 "rw_mbytes_per_sec": 0, 00:09:45.454 "r_mbytes_per_sec": 0, 00:09:45.454 "w_mbytes_per_sec": 0 00:09:45.454 }, 00:09:45.454 "claimed": false, 00:09:45.454 "zoned": false, 00:09:45.454 "supported_io_types": { 00:09:45.454 "read": true, 00:09:45.454 "write": true, 00:09:45.454 "unmap": true, 00:09:45.454 "flush": true, 00:09:45.454 "reset": true, 00:09:45.454 "nvme_admin": false, 00:09:45.454 "nvme_io": false, 00:09:45.454 "nvme_io_md": false, 00:09:45.454 "write_zeroes": true, 00:09:45.454 "zcopy": false, 00:09:45.454 "get_zone_info": false, 00:09:45.454 "zone_management": false, 00:09:45.454 "zone_append": false, 00:09:45.454 "compare": false, 00:09:45.454 "compare_and_write": false, 00:09:45.454 "abort": 
false, 00:09:45.454 "seek_hole": false, 00:09:45.454 "seek_data": false, 00:09:45.454 "copy": false, 00:09:45.454 "nvme_iov_md": false 00:09:45.454 }, 00:09:45.454 "memory_domains": [ 00:09:45.454 { 00:09:45.454 "dma_device_id": "system", 00:09:45.454 "dma_device_type": 1 00:09:45.454 }, 00:09:45.454 { 00:09:45.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.454 "dma_device_type": 2 00:09:45.454 }, 00:09:45.454 { 00:09:45.454 "dma_device_id": "system", 00:09:45.454 "dma_device_type": 1 00:09:45.454 }, 00:09:45.454 { 00:09:45.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.454 "dma_device_type": 2 00:09:45.454 }, 00:09:45.454 { 00:09:45.454 "dma_device_id": "system", 00:09:45.454 "dma_device_type": 1 00:09:45.454 }, 00:09:45.454 { 00:09:45.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.454 "dma_device_type": 2 00:09:45.454 }, 00:09:45.454 { 00:09:45.454 "dma_device_id": "system", 00:09:45.454 "dma_device_type": 1 00:09:45.454 }, 00:09:45.454 { 00:09:45.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.454 "dma_device_type": 2 00:09:45.454 } 00:09:45.454 ], 00:09:45.454 "driver_specific": { 00:09:45.454 "raid": { 00:09:45.454 "uuid": "ea6c26dc-8c4d-46c1-b4ac-e9d4958cc68c", 00:09:45.454 "strip_size_kb": 64, 00:09:45.454 "state": "online", 00:09:45.454 "raid_level": "concat", 00:09:45.454 "superblock": true, 00:09:45.454 "num_base_bdevs": 4, 00:09:45.454 "num_base_bdevs_discovered": 4, 00:09:45.454 "num_base_bdevs_operational": 4, 00:09:45.454 "base_bdevs_list": [ 00:09:45.454 { 00:09:45.454 "name": "NewBaseBdev", 00:09:45.454 "uuid": "20c6a118-2b74-4783-a757-7d71d66bba49", 00:09:45.454 "is_configured": true, 00:09:45.454 "data_offset": 2048, 00:09:45.454 "data_size": 63488 00:09:45.454 }, 00:09:45.454 { 00:09:45.454 "name": "BaseBdev2", 00:09:45.454 "uuid": "37343625-75b0-4be7-b2dc-0aceb7e83e05", 00:09:45.454 "is_configured": true, 00:09:45.454 "data_offset": 2048, 00:09:45.454 "data_size": 63488 00:09:45.454 }, 00:09:45.454 { 00:09:45.454 
"name": "BaseBdev3", 00:09:45.454 "uuid": "d352221d-7121-4783-972c-ed4404dc744b", 00:09:45.454 "is_configured": true, 00:09:45.454 "data_offset": 2048, 00:09:45.454 "data_size": 63488 00:09:45.454 }, 00:09:45.454 { 00:09:45.454 "name": "BaseBdev4", 00:09:45.454 "uuid": "3847bc5a-48a1-4fe4-b1f8-06d65949bc0f", 00:09:45.454 "is_configured": true, 00:09:45.454 "data_offset": 2048, 00:09:45.454 "data_size": 63488 00:09:45.454 } 00:09:45.454 ] 00:09:45.454 } 00:09:45.454 } 00:09:45.454 }' 00:09:45.454 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:45.712 BaseBdev2 00:09:45.712 BaseBdev3 00:09:45.712 BaseBdev4' 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.712 23:49:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.712 [2024-11-02 23:49:39.784003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.712 [2024-11-02 23:49:39.784095] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.712 [2024-11-02 23:49:39.784228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.712 [2024-11-02 23:49:39.784342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.712 [2024-11-02 23:49:39.784387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82639 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 82639 ']' 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 82639 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:45.712 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82639 00:09:45.971 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:45.971 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:45.971 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82639' 00:09:45.971 killing process with pid 82639 00:09:45.971 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 82639 00:09:45.971 [2024-11-02 23:49:39.827735] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:45.971 23:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 82639 00:09:45.971 [2024-11-02 23:49:39.870212] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.230 23:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:46.230 00:09:46.230 real 0m9.827s 00:09:46.230 user 0m16.895s 00:09:46.230 sys 0m2.048s 00:09:46.230 23:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:46.230 23:49:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.230 ************************************ 00:09:46.230 END TEST raid_state_function_test_sb 00:09:46.230 ************************************ 00:09:46.230 23:49:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:09:46.230 23:49:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:46.230 23:49:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:46.231 23:49:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:46.231 ************************************ 00:09:46.231 START TEST raid_superblock_test 00:09:46.231 ************************************ 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83293 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83293 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 83293 ']' 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:46.231 23:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.231 [2024-11-02 23:49:40.249249] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
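Before creating the RAID volume, raid_superblock_test builds three parallel arrays (`base_bdevs_malloc`, `base_bdevs_pt`, `base_bdevs_pt_uuid`) in the bdev_raid.sh@416-@425 loop traced below. A minimal runnable sketch of that bookkeeping (the `printf`-generated UUIDs are an assumption to keep the loop compact; the traced run uses the literal UUIDs 00000000-0000-0000-0000-000000000001 through -000000000004, and the per-iteration `rpc_cmd bdev_malloc_create` / `bdev_passthru_create` calls are elided here):

```shell
#!/usr/bin/env bash
# Sketch of the array-building loop at bdev_raid.sh@416-@425, with
# num_base_bdevs=4 as in this run.
num_base_bdevs=4
base_bdevs_malloc=()
base_bdevs_pt=()
base_bdevs_pt_uuid=()

for (( i = 1; i <= num_base_bdevs; i++ )); do   # @416
        bdev_malloc="malloc$i"                  # @417
        bdev_pt="pt$i"                          # @418
        # @419: assumption -- generate the zero-padded UUID instead of
        # hardcoding one per iteration as the real script does.
        bdev_pt_uuid="00000000-0000-0000-0000-$(printf '%012d' "$i")"
        base_bdevs_malloc+=("$bdev_malloc")     # @421
        base_bdevs_pt+=("$bdev_pt")             # @422
        base_bdevs_pt_uuid+=("$bdev_pt_uuid")   # @423
        # In the real test, rpc_cmd bdev_malloc_create 32 512 -b $bdev_malloc
        # (@425) and rpc_cmd bdev_passthru_create ... (@426) run here.
done

echo "${base_bdevs_pt[*]}"   # pt1 pt2 pt3 pt4
```

The resulting `pt1 pt2 pt3 pt4` list is what later feeds `bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s` at @430 in the trace.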
00:09:46.231 [2024-11-02 23:49:40.249486] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83293 ] 00:09:46.490 [2024-11-02 23:49:40.388312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.490 [2024-11-02 23:49:40.416859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.490 [2024-11-02 23:49:40.461716] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.490 [2024-11-02 23:49:40.461857] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.058 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:47.058 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:47.058 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:47.058 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.058 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:47.058 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:47.058 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:47.058 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.058 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.058 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.058 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:47.058 
23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.058 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.332 malloc1 00:09:47.332 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.333 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:47.333 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.333 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.333 [2024-11-02 23:49:41.173437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:47.333 [2024-11-02 23:49:41.173762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.333 [2024-11-02 23:49:41.173878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:09:47.333 [2024-11-02 23:49:41.173967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.333 [2024-11-02 23:49:41.176583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.333 [2024-11-02 23:49:41.176707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:47.333 pt1 00:09:47.333 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.333 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.333 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.333 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:47.333 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:47.333 23:49:41 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:47.333 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.333 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.333 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.334 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:47.334 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.334 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.334 malloc2 00:09:47.334 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.334 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:47.334 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.334 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.334 [2024-11-02 23:49:41.202805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:47.334 [2024-11-02 23:49:41.203088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.334 [2024-11-02 23:49:41.203192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:47.334 [2024-11-02 23:49:41.203291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.334 [2024-11-02 23:49:41.205848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.334 [2024-11-02 23:49:41.206013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:47.334 
pt2 00:09:47.334 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.334 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.334 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.334 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:47.334 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:47.334 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:47.334 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.334 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.334 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.335 malloc3 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.335 [2024-11-02 23:49:41.235940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:47.335 [2024-11-02 23:49:41.236175] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.335 [2024-11-02 23:49:41.236283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:47.335 [2024-11-02 23:49:41.236401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.335 [2024-11-02 23:49:41.238923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.335 [2024-11-02 23:49:41.239002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:47.335 pt3 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.335 malloc4 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.335 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.336 [2024-11-02 23:49:41.278837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:47.336 [2024-11-02 23:49:41.279241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.336 [2024-11-02 23:49:41.279330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:47.336 [2024-11-02 23:49:41.279425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.336 [2024-11-02 23:49:41.282009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.336 [2024-11-02 23:49:41.282110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:47.336 pt4 00:09:47.336 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.336 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.336 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.336 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:47.336 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.336 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.336 [2024-11-02 23:49:41.290955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:47.336 [2024-11-02 
23:49:41.293037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:47.336 [2024-11-02 23:49:41.293111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:47.336 [2024-11-02 23:49:41.293160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:47.336 [2024-11-02 23:49:41.293331] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:47.336 [2024-11-02 23:49:41.293348] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:47.336 [2024-11-02 23:49:41.293655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:47.336 [2024-11-02 23:49:41.293818] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:47.336 [2024-11-02 23:49:41.293831] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:47.336 [2024-11-02 23:49:41.293990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.336 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.336 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:47.336 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.336 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.336 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.336 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.336 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.336 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:47.336 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.337 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.337 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.337 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.337 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.337 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.337 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.337 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.337 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.337 "name": "raid_bdev1", 00:09:47.337 "uuid": "4665d886-da31-44e4-88dd-d5d43c888bc4", 00:09:47.337 "strip_size_kb": 64, 00:09:47.337 "state": "online", 00:09:47.337 "raid_level": "concat", 00:09:47.337 "superblock": true, 00:09:47.337 "num_base_bdevs": 4, 00:09:47.337 "num_base_bdevs_discovered": 4, 00:09:47.337 "num_base_bdevs_operational": 4, 00:09:47.337 "base_bdevs_list": [ 00:09:47.337 { 00:09:47.337 "name": "pt1", 00:09:47.337 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.337 "is_configured": true, 00:09:47.337 "data_offset": 2048, 00:09:47.337 "data_size": 63488 00:09:47.337 }, 00:09:47.337 { 00:09:47.337 "name": "pt2", 00:09:47.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.337 "is_configured": true, 00:09:47.337 "data_offset": 2048, 00:09:47.337 "data_size": 63488 00:09:47.337 }, 00:09:47.337 { 00:09:47.337 "name": "pt3", 00:09:47.337 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.337 "is_configured": true, 00:09:47.337 "data_offset": 2048, 00:09:47.337 
"data_size": 63488 00:09:47.337 }, 00:09:47.337 { 00:09:47.337 "name": "pt4", 00:09:47.337 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:47.337 "is_configured": true, 00:09:47.337 "data_offset": 2048, 00:09:47.337 "data_size": 63488 00:09:47.337 } 00:09:47.337 ] 00:09:47.337 }' 00:09:47.337 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.337 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.906 [2024-11-02 23:49:41.738678] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.906 "name": "raid_bdev1", 00:09:47.906 "aliases": [ 00:09:47.906 "4665d886-da31-44e4-88dd-d5d43c888bc4" 
00:09:47.906 ], 00:09:47.906 "product_name": "Raid Volume", 00:09:47.906 "block_size": 512, 00:09:47.906 "num_blocks": 253952, 00:09:47.906 "uuid": "4665d886-da31-44e4-88dd-d5d43c888bc4", 00:09:47.906 "assigned_rate_limits": { 00:09:47.906 "rw_ios_per_sec": 0, 00:09:47.906 "rw_mbytes_per_sec": 0, 00:09:47.906 "r_mbytes_per_sec": 0, 00:09:47.906 "w_mbytes_per_sec": 0 00:09:47.906 }, 00:09:47.906 "claimed": false, 00:09:47.906 "zoned": false, 00:09:47.906 "supported_io_types": { 00:09:47.906 "read": true, 00:09:47.906 "write": true, 00:09:47.906 "unmap": true, 00:09:47.906 "flush": true, 00:09:47.906 "reset": true, 00:09:47.906 "nvme_admin": false, 00:09:47.906 "nvme_io": false, 00:09:47.906 "nvme_io_md": false, 00:09:47.906 "write_zeroes": true, 00:09:47.906 "zcopy": false, 00:09:47.906 "get_zone_info": false, 00:09:47.906 "zone_management": false, 00:09:47.906 "zone_append": false, 00:09:47.906 "compare": false, 00:09:47.906 "compare_and_write": false, 00:09:47.906 "abort": false, 00:09:47.906 "seek_hole": false, 00:09:47.906 "seek_data": false, 00:09:47.906 "copy": false, 00:09:47.906 "nvme_iov_md": false 00:09:47.906 }, 00:09:47.906 "memory_domains": [ 00:09:47.906 { 00:09:47.906 "dma_device_id": "system", 00:09:47.906 "dma_device_type": 1 00:09:47.906 }, 00:09:47.906 { 00:09:47.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.906 "dma_device_type": 2 00:09:47.906 }, 00:09:47.906 { 00:09:47.906 "dma_device_id": "system", 00:09:47.906 "dma_device_type": 1 00:09:47.906 }, 00:09:47.906 { 00:09:47.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.906 "dma_device_type": 2 00:09:47.906 }, 00:09:47.906 { 00:09:47.906 "dma_device_id": "system", 00:09:47.906 "dma_device_type": 1 00:09:47.906 }, 00:09:47.906 { 00:09:47.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.906 "dma_device_type": 2 00:09:47.906 }, 00:09:47.906 { 00:09:47.906 "dma_device_id": "system", 00:09:47.906 "dma_device_type": 1 00:09:47.906 }, 00:09:47.906 { 00:09:47.906 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:47.906 "dma_device_type": 2 00:09:47.906 } 00:09:47.906 ], 00:09:47.906 "driver_specific": { 00:09:47.906 "raid": { 00:09:47.906 "uuid": "4665d886-da31-44e4-88dd-d5d43c888bc4", 00:09:47.906 "strip_size_kb": 64, 00:09:47.906 "state": "online", 00:09:47.906 "raid_level": "concat", 00:09:47.906 "superblock": true, 00:09:47.906 "num_base_bdevs": 4, 00:09:47.906 "num_base_bdevs_discovered": 4, 00:09:47.906 "num_base_bdevs_operational": 4, 00:09:47.906 "base_bdevs_list": [ 00:09:47.906 { 00:09:47.906 "name": "pt1", 00:09:47.906 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.906 "is_configured": true, 00:09:47.906 "data_offset": 2048, 00:09:47.906 "data_size": 63488 00:09:47.906 }, 00:09:47.906 { 00:09:47.906 "name": "pt2", 00:09:47.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.906 "is_configured": true, 00:09:47.906 "data_offset": 2048, 00:09:47.906 "data_size": 63488 00:09:47.906 }, 00:09:47.906 { 00:09:47.906 "name": "pt3", 00:09:47.906 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.906 "is_configured": true, 00:09:47.906 "data_offset": 2048, 00:09:47.906 "data_size": 63488 00:09:47.906 }, 00:09:47.906 { 00:09:47.906 "name": "pt4", 00:09:47.906 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:47.906 "is_configured": true, 00:09:47.906 "data_offset": 2048, 00:09:47.906 "data_size": 63488 00:09:47.906 } 00:09:47.906 ] 00:09:47.906 } 00:09:47.906 } 00:09:47.906 }' 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:47.906 pt2 00:09:47.906 pt3 00:09:47.906 pt4' 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.906 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.907 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.907 23:49:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:47.907 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.907 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.907 23:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.907 23:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:48.167 [2024-11-02 23:49:42.074099] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4665d886-da31-44e4-88dd-d5d43c888bc4 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4665d886-da31-44e4-88dd-d5d43c888bc4 ']' 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.167 [2024-11-02 23:49:42.121661] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.167 [2024-11-02 23:49:42.121701] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.167 [2024-11-02 23:49:42.121812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.167 [2024-11-02 23:49:42.121894] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.167 [2024-11-02 23:49:42.121915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.167 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.427 [2024-11-02 23:49:42.289397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:48.427 [2024-11-02 23:49:42.291538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:48.427 [2024-11-02 23:49:42.291653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:48.427 [2024-11-02 23:49:42.291708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:48.427 [2024-11-02 23:49:42.291816] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:48.427 [2024-11-02 23:49:42.292259] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:48.427 [2024-11-02 23:49:42.292358] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:48.427 [2024-11-02 23:49:42.292418] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:48.427 [2024-11-02 23:49:42.292469] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.427 [2024-11-02 23:49:42.292637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state 
configuring 00:09:48.427 request: 00:09:48.427 { 00:09:48.427 "name": "raid_bdev1", 00:09:48.427 "raid_level": "concat", 00:09:48.427 "base_bdevs": [ 00:09:48.427 "malloc1", 00:09:48.427 "malloc2", 00:09:48.427 "malloc3", 00:09:48.427 "malloc4" 00:09:48.427 ], 00:09:48.427 "strip_size_kb": 64, 00:09:48.427 "superblock": false, 00:09:48.427 "method": "bdev_raid_create", 00:09:48.427 "req_id": 1 00:09:48.427 } 00:09:48.427 Got JSON-RPC error response 00:09:48.427 response: 00:09:48.427 { 00:09:48.427 "code": -17, 00:09:48.427 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:48.427 } 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.427 [2024-11-02 23:49:42.357283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:48.427 [2024-11-02 23:49:42.357388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.427 [2024-11-02 23:49:42.357433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:48.427 [2024-11-02 23:49:42.357646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.427 [2024-11-02 23:49:42.360295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.427 [2024-11-02 23:49:42.360423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:48.427 [2024-11-02 23:49:42.360585] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:48.427 [2024-11-02 23:49:42.360688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:48.427 pt1 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.427 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.428 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.428 "name": "raid_bdev1", 00:09:48.428 "uuid": "4665d886-da31-44e4-88dd-d5d43c888bc4", 00:09:48.428 "strip_size_kb": 64, 00:09:48.428 "state": "configuring", 00:09:48.428 "raid_level": "concat", 00:09:48.428 "superblock": true, 00:09:48.428 "num_base_bdevs": 4, 00:09:48.428 "num_base_bdevs_discovered": 1, 00:09:48.428 "num_base_bdevs_operational": 4, 00:09:48.428 "base_bdevs_list": [ 00:09:48.428 { 00:09:48.428 "name": "pt1", 00:09:48.428 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.428 "is_configured": true, 00:09:48.428 "data_offset": 2048, 00:09:48.428 "data_size": 63488 00:09:48.428 }, 00:09:48.428 { 00:09:48.428 "name": null, 00:09:48.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.428 "is_configured": false, 00:09:48.428 "data_offset": 2048, 00:09:48.428 "data_size": 63488 00:09:48.428 }, 00:09:48.428 { 00:09:48.428 "name": null, 00:09:48.428 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.428 "is_configured": false, 00:09:48.428 "data_offset": 2048, 00:09:48.428 "data_size": 63488 00:09:48.428 }, 00:09:48.428 { 00:09:48.428 "name": null, 00:09:48.428 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:48.428 "is_configured": false, 00:09:48.428 "data_offset": 2048, 00:09:48.428 "data_size": 63488 00:09:48.428 } 00:09:48.428 ] 00:09:48.428 }' 00:09:48.428 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.428 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.996 [2024-11-02 23:49:42.796691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:48.996 [2024-11-02 23:49:42.797015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.996 [2024-11-02 23:49:42.797102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:48.996 [2024-11-02 23:49:42.797173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.996 [2024-11-02 23:49:42.797755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.996 [2024-11-02 23:49:42.797890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:48.996 [2024-11-02 23:49:42.798099] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:48.996 [2024-11-02 23:49:42.798134] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:48.996 pt2 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.996 [2024-11-02 23:49:42.808663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.996 23:49:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.996 "name": "raid_bdev1", 00:09:48.996 "uuid": "4665d886-da31-44e4-88dd-d5d43c888bc4", 00:09:48.996 "strip_size_kb": 64, 00:09:48.996 "state": "configuring", 00:09:48.996 "raid_level": "concat", 00:09:48.996 "superblock": true, 00:09:48.996 "num_base_bdevs": 4, 00:09:48.996 "num_base_bdevs_discovered": 1, 00:09:48.996 "num_base_bdevs_operational": 4, 00:09:48.996 "base_bdevs_list": [ 00:09:48.996 { 00:09:48.996 "name": "pt1", 00:09:48.996 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.996 "is_configured": true, 00:09:48.996 "data_offset": 2048, 00:09:48.996 "data_size": 63488 00:09:48.996 }, 00:09:48.996 { 00:09:48.996 "name": null, 00:09:48.996 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.996 "is_configured": false, 00:09:48.996 "data_offset": 0, 00:09:48.996 "data_size": 63488 00:09:48.996 }, 00:09:48.996 { 00:09:48.996 "name": null, 00:09:48.996 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.996 "is_configured": false, 00:09:48.996 "data_offset": 2048, 00:09:48.996 "data_size": 63488 00:09:48.996 }, 00:09:48.996 { 00:09:48.996 "name": null, 00:09:48.996 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:48.996 "is_configured": false, 00:09:48.996 "data_offset": 2048, 00:09:48.996 "data_size": 63488 00:09:48.996 } 00:09:48.996 ] 00:09:48.996 }' 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.996 23:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.257 [2024-11-02 23:49:43.247933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:49.257 [2024-11-02 23:49:43.248258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.257 [2024-11-02 23:49:43.248402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:49.257 [2024-11-02 23:49:43.248503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.257 [2024-11-02 23:49:43.249081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.257 [2024-11-02 23:49:43.249225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:49.257 [2024-11-02 23:49:43.249401] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:49.257 [2024-11-02 23:49:43.249473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:49.257 pt2 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.257 [2024-11-02 23:49:43.259854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:49.257 [2024-11-02 23:49:43.260041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.257 [2024-11-02 23:49:43.260167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:49.257 [2024-11-02 23:49:43.260256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.257 [2024-11-02 23:49:43.260766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.257 [2024-11-02 23:49:43.260902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:49.257 [2024-11-02 23:49:43.261056] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:49.257 [2024-11-02 23:49:43.261124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:49.257 pt3 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.257 [2024-11-02 23:49:43.271856] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:09:49.257 [2024-11-02 23:49:43.272029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.257 [2024-11-02 23:49:43.272100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:49.257 [2024-11-02 23:49:43.272153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.257 [2024-11-02 23:49:43.272573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.257 [2024-11-02 23:49:43.272675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:49.257 [2024-11-02 23:49:43.272832] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:49.257 [2024-11-02 23:49:43.272868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:49.257 [2024-11-02 23:49:43.272983] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:49.257 [2024-11-02 23:49:43.272996] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:49.257 [2024-11-02 23:49:43.273253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:49.257 [2024-11-02 23:49:43.273380] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:49.257 [2024-11-02 23:49:43.273392] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:49.257 [2024-11-02 23:49:43.273498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.257 pt4 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.257 
23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.257 "name": "raid_bdev1", 00:09:49.257 "uuid": "4665d886-da31-44e4-88dd-d5d43c888bc4", 00:09:49.257 "strip_size_kb": 64, 00:09:49.257 "state": "online", 00:09:49.257 "raid_level": "concat", 00:09:49.257 "superblock": true, 00:09:49.257 
"num_base_bdevs": 4, 00:09:49.257 "num_base_bdevs_discovered": 4, 00:09:49.257 "num_base_bdevs_operational": 4, 00:09:49.257 "base_bdevs_list": [ 00:09:49.257 { 00:09:49.257 "name": "pt1", 00:09:49.257 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.257 "is_configured": true, 00:09:49.257 "data_offset": 2048, 00:09:49.257 "data_size": 63488 00:09:49.257 }, 00:09:49.257 { 00:09:49.257 "name": "pt2", 00:09:49.257 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.257 "is_configured": true, 00:09:49.257 "data_offset": 2048, 00:09:49.257 "data_size": 63488 00:09:49.257 }, 00:09:49.257 { 00:09:49.257 "name": "pt3", 00:09:49.257 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.257 "is_configured": true, 00:09:49.257 "data_offset": 2048, 00:09:49.257 "data_size": 63488 00:09:49.257 }, 00:09:49.257 { 00:09:49.257 "name": "pt4", 00:09:49.257 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:49.257 "is_configured": true, 00:09:49.257 "data_offset": 2048, 00:09:49.257 "data_size": 63488 00:09:49.257 } 00:09:49.257 ] 00:09:49.257 }' 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.257 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.826 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:49.826 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:49.826 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:49.826 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:49.826 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:49.826 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:49.826 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:49.826 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.826 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:49.826 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.826 [2024-11-02 23:49:43.719474] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.826 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.826 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:49.826 "name": "raid_bdev1", 00:09:49.826 "aliases": [ 00:09:49.826 "4665d886-da31-44e4-88dd-d5d43c888bc4" 00:09:49.826 ], 00:09:49.826 "product_name": "Raid Volume", 00:09:49.826 "block_size": 512, 00:09:49.826 "num_blocks": 253952, 00:09:49.826 "uuid": "4665d886-da31-44e4-88dd-d5d43c888bc4", 00:09:49.826 "assigned_rate_limits": { 00:09:49.826 "rw_ios_per_sec": 0, 00:09:49.826 "rw_mbytes_per_sec": 0, 00:09:49.826 "r_mbytes_per_sec": 0, 00:09:49.826 "w_mbytes_per_sec": 0 00:09:49.826 }, 00:09:49.826 "claimed": false, 00:09:49.826 "zoned": false, 00:09:49.826 "supported_io_types": { 00:09:49.826 "read": true, 00:09:49.826 "write": true, 00:09:49.826 "unmap": true, 00:09:49.826 "flush": true, 00:09:49.826 "reset": true, 00:09:49.826 "nvme_admin": false, 00:09:49.826 "nvme_io": false, 00:09:49.826 "nvme_io_md": false, 00:09:49.826 "write_zeroes": true, 00:09:49.826 "zcopy": false, 00:09:49.826 "get_zone_info": false, 00:09:49.826 "zone_management": false, 00:09:49.826 "zone_append": false, 00:09:49.826 "compare": false, 00:09:49.826 "compare_and_write": false, 00:09:49.826 "abort": false, 00:09:49.826 "seek_hole": false, 00:09:49.826 "seek_data": false, 00:09:49.826 "copy": false, 00:09:49.826 "nvme_iov_md": false 00:09:49.826 }, 00:09:49.826 "memory_domains": [ 00:09:49.826 { 00:09:49.826 "dma_device_id": "system", 
00:09:49.826 "dma_device_type": 1 00:09:49.826 }, 00:09:49.826 { 00:09:49.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.826 "dma_device_type": 2 00:09:49.826 }, 00:09:49.826 { 00:09:49.826 "dma_device_id": "system", 00:09:49.826 "dma_device_type": 1 00:09:49.826 }, 00:09:49.826 { 00:09:49.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.826 "dma_device_type": 2 00:09:49.826 }, 00:09:49.826 { 00:09:49.826 "dma_device_id": "system", 00:09:49.826 "dma_device_type": 1 00:09:49.826 }, 00:09:49.826 { 00:09:49.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.826 "dma_device_type": 2 00:09:49.826 }, 00:09:49.826 { 00:09:49.826 "dma_device_id": "system", 00:09:49.826 "dma_device_type": 1 00:09:49.826 }, 00:09:49.826 { 00:09:49.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.826 "dma_device_type": 2 00:09:49.826 } 00:09:49.826 ], 00:09:49.826 "driver_specific": { 00:09:49.826 "raid": { 00:09:49.826 "uuid": "4665d886-da31-44e4-88dd-d5d43c888bc4", 00:09:49.826 "strip_size_kb": 64, 00:09:49.826 "state": "online", 00:09:49.826 "raid_level": "concat", 00:09:49.826 "superblock": true, 00:09:49.826 "num_base_bdevs": 4, 00:09:49.826 "num_base_bdevs_discovered": 4, 00:09:49.826 "num_base_bdevs_operational": 4, 00:09:49.826 "base_bdevs_list": [ 00:09:49.826 { 00:09:49.826 "name": "pt1", 00:09:49.826 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.826 "is_configured": true, 00:09:49.826 "data_offset": 2048, 00:09:49.826 "data_size": 63488 00:09:49.826 }, 00:09:49.826 { 00:09:49.826 "name": "pt2", 00:09:49.826 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.826 "is_configured": true, 00:09:49.826 "data_offset": 2048, 00:09:49.826 "data_size": 63488 00:09:49.826 }, 00:09:49.826 { 00:09:49.826 "name": "pt3", 00:09:49.826 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.826 "is_configured": true, 00:09:49.826 "data_offset": 2048, 00:09:49.826 "data_size": 63488 00:09:49.826 }, 00:09:49.826 { 00:09:49.826 "name": "pt4", 00:09:49.826 
"uuid": "00000000-0000-0000-0000-000000000004", 00:09:49.826 "is_configured": true, 00:09:49.826 "data_offset": 2048, 00:09:49.826 "data_size": 63488 00:09:49.826 } 00:09:49.826 ] 00:09:49.826 } 00:09:49.826 } 00:09:49.827 }' 00:09:49.827 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.827 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:49.827 pt2 00:09:49.827 pt3 00:09:49.827 pt4' 00:09:49.827 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.827 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:49.827 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.827 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:49.827 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.827 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.827 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.827 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.827 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.827 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.827 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.827 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:49.827 23:49:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.827 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.827 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.086 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.086 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.086 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.086 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.086 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:50.086 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.086 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.086 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.086 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.086 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.086 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.086 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.086 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:50.086 23:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.086 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:09:50.086 23:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.086 [2024-11-02 23:49:44.038941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4665d886-da31-44e4-88dd-d5d43c888bc4 '!=' 4665d886-da31-44e4-88dd-d5d43c888bc4 ']' 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83293 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 83293 ']' 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 83293 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:50.086 23:49:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83293 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83293' 00:09:50.086 killing process with pid 83293 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 83293 00:09:50.086 [2024-11-02 23:49:44.123827] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:50.086 [2024-11-02 23:49:44.124016] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.086 23:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 83293 00:09:50.086 [2024-11-02 23:49:44.124147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.086 [2024-11-02 23:49:44.124205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:50.086 [2024-11-02 23:49:44.169129] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:50.346 23:49:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:50.346 00:09:50.346 real 0m4.221s 00:09:50.346 user 0m6.670s 00:09:50.346 sys 0m0.965s 00:09:50.346 23:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:50.346 23:49:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.346 ************************************ 00:09:50.346 END TEST raid_superblock_test 00:09:50.346 ************************************ 00:09:50.346 
23:49:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:09:50.346 23:49:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:50.606 23:49:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:50.606 23:49:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:50.606 ************************************ 00:09:50.606 START TEST raid_read_error_test 00:09:50.606 ************************************ 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NiwUb6bAWL 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83541 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:50.606 23:49:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83541 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 83541 ']' 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:50.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:50.606 23:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.606 [2024-11-02 23:49:44.558446] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:09:50.606 [2024-11-02 23:49:44.559015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83541 ] 00:09:50.866 [2024-11-02 23:49:44.711295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.866 [2024-11-02 23:49:44.738505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.866 [2024-11-02 23:49:44.783613] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.866 [2024-11-02 23:49:44.783730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.434 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:51.434 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:51.434 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:51.434 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.435 BaseBdev1_malloc 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.435 true 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.435 [2024-11-02 23:49:45.467041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:51.435 [2024-11-02 23:49:45.467248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.435 [2024-11-02 23:49:45.467293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:51.435 [2024-11-02 23:49:45.467306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.435 [2024-11-02 23:49:45.470236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.435 [2024-11-02 23:49:45.470273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:51.435 BaseBdev1 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.435 BaseBdev2_malloc 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.435 true 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.435 [2024-11-02 23:49:45.508122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:51.435 [2024-11-02 23:49:45.508352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.435 [2024-11-02 23:49:45.508434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:51.435 [2024-11-02 23:49:45.508556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.435 [2024-11-02 23:49:45.511221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.435 [2024-11-02 23:49:45.511411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:51.435 BaseBdev2 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.435 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.694 BaseBdev3_malloc 00:09:51.694 23:49:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.694 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:51.694 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.694 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.694 true 00:09:51.694 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.694 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:51.694 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.694 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.694 [2024-11-02 23:49:45.549416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:51.694 [2024-11-02 23:49:45.549513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.694 [2024-11-02 23:49:45.549540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:51.694 [2024-11-02 23:49:45.549551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.694 [2024-11-02 23:49:45.552046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.694 [2024-11-02 23:49:45.552086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:51.694 BaseBdev3 00:09:51.694 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.694 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.695 BaseBdev4_malloc 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.695 true 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.695 [2024-11-02 23:49:45.600326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:51.695 [2024-11-02 23:49:45.600428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.695 [2024-11-02 23:49:45.600461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:51.695 [2024-11-02 23:49:45.600471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.695 [2024-11-02 23:49:45.602936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.695 [2024-11-02 23:49:45.602986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:51.695 BaseBdev4 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.695 [2024-11-02 23:49:45.612371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.695 [2024-11-02 23:49:45.614542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.695 [2024-11-02 23:49:45.614636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.695 [2024-11-02 23:49:45.614713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:51.695 [2024-11-02 23:49:45.614957] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:09:51.695 [2024-11-02 23:49:45.614979] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:51.695 [2024-11-02 23:49:45.615252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:51.695 [2024-11-02 23:49:45.615400] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:09:51.695 [2024-11-02 23:49:45.615425] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:09:51.695 [2024-11-02 23:49:45.615552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:51.695 23:49:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.695 "name": "raid_bdev1", 00:09:51.695 "uuid": "ce838f7d-5f79-48eb-ba4e-1ae924fd6b93", 00:09:51.695 "strip_size_kb": 64, 00:09:51.695 "state": "online", 00:09:51.695 "raid_level": "concat", 00:09:51.695 "superblock": true, 00:09:51.695 "num_base_bdevs": 4, 00:09:51.695 "num_base_bdevs_discovered": 4, 00:09:51.695 "num_base_bdevs_operational": 4, 00:09:51.695 "base_bdevs_list": [ 
00:09:51.695 { 00:09:51.695 "name": "BaseBdev1", 00:09:51.695 "uuid": "14627a31-b77a-52cd-af32-ef57085f07da", 00:09:51.695 "is_configured": true, 00:09:51.695 "data_offset": 2048, 00:09:51.695 "data_size": 63488 00:09:51.695 }, 00:09:51.695 { 00:09:51.695 "name": "BaseBdev2", 00:09:51.695 "uuid": "da99d22e-c718-5030-91f1-92a3bdbe075e", 00:09:51.695 "is_configured": true, 00:09:51.695 "data_offset": 2048, 00:09:51.695 "data_size": 63488 00:09:51.695 }, 00:09:51.695 { 00:09:51.695 "name": "BaseBdev3", 00:09:51.695 "uuid": "5d69acab-9fc7-5215-baa0-977ac15710b6", 00:09:51.695 "is_configured": true, 00:09:51.695 "data_offset": 2048, 00:09:51.695 "data_size": 63488 00:09:51.695 }, 00:09:51.695 { 00:09:51.695 "name": "BaseBdev4", 00:09:51.695 "uuid": "dd742b47-b7ca-57d9-8995-f2d0f01616e1", 00:09:51.695 "is_configured": true, 00:09:51.695 "data_offset": 2048, 00:09:51.695 "data_size": 63488 00:09:51.695 } 00:09:51.695 ] 00:09:51.695 }' 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.695 23:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.261 23:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:52.261 23:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:52.261 [2024-11-02 23:49:46.163946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.202 23:49:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.202 23:49:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.202 "name": "raid_bdev1", 00:09:53.202 "uuid": "ce838f7d-5f79-48eb-ba4e-1ae924fd6b93", 00:09:53.202 "strip_size_kb": 64, 00:09:53.202 "state": "online", 00:09:53.202 "raid_level": "concat", 00:09:53.202 "superblock": true, 00:09:53.202 "num_base_bdevs": 4, 00:09:53.202 "num_base_bdevs_discovered": 4, 00:09:53.202 "num_base_bdevs_operational": 4, 00:09:53.202 "base_bdevs_list": [ 00:09:53.202 { 00:09:53.202 "name": "BaseBdev1", 00:09:53.202 "uuid": "14627a31-b77a-52cd-af32-ef57085f07da", 00:09:53.202 "is_configured": true, 00:09:53.202 "data_offset": 2048, 00:09:53.202 "data_size": 63488 00:09:53.202 }, 00:09:53.202 { 00:09:53.202 "name": "BaseBdev2", 00:09:53.202 "uuid": "da99d22e-c718-5030-91f1-92a3bdbe075e", 00:09:53.202 "is_configured": true, 00:09:53.202 "data_offset": 2048, 00:09:53.202 "data_size": 63488 00:09:53.202 }, 00:09:53.202 { 00:09:53.202 "name": "BaseBdev3", 00:09:53.202 "uuid": "5d69acab-9fc7-5215-baa0-977ac15710b6", 00:09:53.202 "is_configured": true, 00:09:53.202 "data_offset": 2048, 00:09:53.202 "data_size": 63488 00:09:53.202 }, 00:09:53.202 { 00:09:53.202 "name": "BaseBdev4", 00:09:53.202 "uuid": "dd742b47-b7ca-57d9-8995-f2d0f01616e1", 00:09:53.202 "is_configured": true, 00:09:53.202 "data_offset": 2048, 00:09:53.202 "data_size": 63488 00:09:53.202 } 00:09:53.202 ] 00:09:53.202 }' 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.202 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.461 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:53.461 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.461 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.461 [2024-11-02 23:49:47.503954] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:53.461 [2024-11-02 23:49:47.504049] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.461 [2024-11-02 23:49:47.506708] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.461 [2024-11-02 23:49:47.506813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.461 [2024-11-02 23:49:47.506883] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.461 [2024-11-02 23:49:47.506926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:09:53.461 { 00:09:53.461 "results": [ 00:09:53.461 { 00:09:53.461 "job": "raid_bdev1", 00:09:53.461 "core_mask": "0x1", 00:09:53.461 "workload": "randrw", 00:09:53.461 "percentage": 50, 00:09:53.461 "status": "finished", 00:09:53.461 "queue_depth": 1, 00:09:53.461 "io_size": 131072, 00:09:53.461 "runtime": 1.340419, 00:09:53.461 "iops": 15805.505591908202, 00:09:53.461 "mibps": 1975.6881989885253, 00:09:53.461 "io_failed": 1, 00:09:53.461 "io_timeout": 0, 00:09:53.461 "avg_latency_us": 87.621913660082, 00:09:53.461 "min_latency_us": 25.041048034934498, 00:09:53.461 "max_latency_us": 1438.071615720524 00:09:53.461 } 00:09:53.461 ], 00:09:53.461 "core_count": 1 00:09:53.461 } 00:09:53.461 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.461 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83541 00:09:53.461 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 83541 ']' 00:09:53.461 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 83541 00:09:53.461 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:53.462 23:49:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:53.462 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83541 00:09:53.462 killing process with pid 83541 00:09:53.462 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:53.462 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:53.462 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83541' 00:09:53.462 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 83541 00:09:53.462 [2024-11-02 23:49:47.554146] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.462 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 83541 00:09:53.721 [2024-11-02 23:49:47.589465] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.721 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NiwUb6bAWL 00:09:53.721 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:53.721 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:53.721 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:53.721 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:53.721 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:53.721 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:53.721 23:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:53.721 00:09:53.721 real 0m3.346s 00:09:53.721 user 0m4.239s 00:09:53.721 sys 0m0.554s 00:09:53.721 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:09:53.721 23:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.721 ************************************ 00:09:53.721 END TEST raid_read_error_test 00:09:53.721 ************************************ 00:09:53.981 23:49:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:09:53.981 23:49:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:53.981 23:49:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:53.981 23:49:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.981 ************************************ 00:09:53.981 START TEST raid_write_error_test 00:09:53.981 ************************************ 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qfxyeqGieo 00:09:53.981 23:49:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83671 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83671 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 83671 ']' 00:09:53.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:53.981 23:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.981 [2024-11-02 23:49:47.987287] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:09:53.981 [2024-11-02 23:49:47.987509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83671 ] 00:09:54.241 [2024-11-02 23:49:48.123986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.241 [2024-11-02 23:49:48.152884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.241 [2024-11-02 23:49:48.194045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.241 [2024-11-02 23:49:48.194161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.818 BaseBdev1_malloc 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.818 true 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.818 [2024-11-02 23:49:48.847773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:54.818 [2024-11-02 23:49:48.847823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.818 [2024-11-02 23:49:48.847843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:54.818 [2024-11-02 23:49:48.847852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.818 [2024-11-02 23:49:48.850020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.818 [2024-11-02 23:49:48.850139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:54.818 BaseBdev1 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.818 BaseBdev2_malloc 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:54.818 23:49:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.818 true 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.818 [2024-11-02 23:49:48.888357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:54.818 [2024-11-02 23:49:48.888406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.818 [2024-11-02 23:49:48.888424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:54.818 [2024-11-02 23:49:48.888441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.818 [2024-11-02 23:49:48.890613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.818 [2024-11-02 23:49:48.890649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:54.818 BaseBdev2 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.818 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:55.078 BaseBdev3_malloc 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.078 true 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.078 [2024-11-02 23:49:48.928957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:55.078 [2024-11-02 23:49:48.929003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.078 [2024-11-02 23:49:48.929022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:55.078 [2024-11-02 23:49:48.929031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.078 [2024-11-02 23:49:48.931199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.078 [2024-11-02 23:49:48.931250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:55.078 BaseBdev3 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.078 BaseBdev4_malloc 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.078 true 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.078 [2024-11-02 23:49:48.979031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:55.078 [2024-11-02 23:49:48.979126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.078 [2024-11-02 23:49:48.979156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:55.078 [2024-11-02 23:49:48.979164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.078 [2024-11-02 23:49:48.981266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.078 [2024-11-02 23:49:48.981301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:55.078 BaseBdev4 
00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.078 [2024-11-02 23:49:48.991065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.078 [2024-11-02 23:49:48.992916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.078 [2024-11-02 23:49:48.992992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.078 [2024-11-02 23:49:48.993056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:55.078 [2024-11-02 23:49:48.993253] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:09:55.078 [2024-11-02 23:49:48.993271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:55.078 [2024-11-02 23:49:48.993535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:55.078 [2024-11-02 23:49:48.993652] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:09:55.078 [2024-11-02 23:49:48.993664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:09:55.078 [2024-11-02 23:49:48.993784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.078 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.079 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.079 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.079 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.079 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.079 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.079 23:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.079 23:49:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.079 23:49:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.079 23:49:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.079 23:49:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.079 23:49:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.079 23:49:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.079 "name": "raid_bdev1", 00:09:55.079 "uuid": "2bd1ba7f-6eee-4ca8-9d1f-6bc9718a94bc", 00:09:55.079 "strip_size_kb": 64, 00:09:55.079 "state": "online", 00:09:55.079 "raid_level": "concat", 00:09:55.079 "superblock": true, 00:09:55.079 "num_base_bdevs": 4, 00:09:55.079 "num_base_bdevs_discovered": 4, 00:09:55.079 
"num_base_bdevs_operational": 4, 00:09:55.079 "base_bdevs_list": [ 00:09:55.079 { 00:09:55.079 "name": "BaseBdev1", 00:09:55.079 "uuid": "0520e205-0749-5d7f-88a5-d53e3c32ebc5", 00:09:55.079 "is_configured": true, 00:09:55.079 "data_offset": 2048, 00:09:55.079 "data_size": 63488 00:09:55.079 }, 00:09:55.079 { 00:09:55.079 "name": "BaseBdev2", 00:09:55.079 "uuid": "1a44b49f-08f1-5a9b-b50d-c9abc242d559", 00:09:55.079 "is_configured": true, 00:09:55.079 "data_offset": 2048, 00:09:55.079 "data_size": 63488 00:09:55.079 }, 00:09:55.079 { 00:09:55.079 "name": "BaseBdev3", 00:09:55.079 "uuid": "453d0855-051d-5d39-836d-a1b72f57f582", 00:09:55.079 "is_configured": true, 00:09:55.079 "data_offset": 2048, 00:09:55.079 "data_size": 63488 00:09:55.079 }, 00:09:55.079 { 00:09:55.079 "name": "BaseBdev4", 00:09:55.079 "uuid": "0dac66c4-aafd-5c4b-8a44-37ac06306893", 00:09:55.079 "is_configured": true, 00:09:55.079 "data_offset": 2048, 00:09:55.079 "data_size": 63488 00:09:55.079 } 00:09:55.079 ] 00:09:55.079 }' 00:09:55.079 23:49:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.079 23:49:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.338 23:49:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:55.338 23:49:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:55.596 [2024-11-02 23:49:49.478635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.542 23:49:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.542 "name": "raid_bdev1", 00:09:56.542 "uuid": "2bd1ba7f-6eee-4ca8-9d1f-6bc9718a94bc", 00:09:56.542 "strip_size_kb": 64, 00:09:56.542 "state": "online", 00:09:56.542 "raid_level": "concat", 00:09:56.542 "superblock": true, 00:09:56.542 "num_base_bdevs": 4, 00:09:56.542 "num_base_bdevs_discovered": 4, 00:09:56.542 "num_base_bdevs_operational": 4, 00:09:56.542 "base_bdevs_list": [ 00:09:56.542 { 00:09:56.542 "name": "BaseBdev1", 00:09:56.542 "uuid": "0520e205-0749-5d7f-88a5-d53e3c32ebc5", 00:09:56.542 "is_configured": true, 00:09:56.542 "data_offset": 2048, 00:09:56.542 "data_size": 63488 00:09:56.542 }, 00:09:56.542 { 00:09:56.542 "name": "BaseBdev2", 00:09:56.542 "uuid": "1a44b49f-08f1-5a9b-b50d-c9abc242d559", 00:09:56.542 "is_configured": true, 00:09:56.542 "data_offset": 2048, 00:09:56.542 "data_size": 63488 00:09:56.542 }, 00:09:56.542 { 00:09:56.542 "name": "BaseBdev3", 00:09:56.542 "uuid": "453d0855-051d-5d39-836d-a1b72f57f582", 00:09:56.542 "is_configured": true, 00:09:56.542 "data_offset": 2048, 00:09:56.542 "data_size": 63488 00:09:56.542 }, 00:09:56.542 { 00:09:56.542 "name": "BaseBdev4", 00:09:56.542 "uuid": "0dac66c4-aafd-5c4b-8a44-37ac06306893", 00:09:56.542 "is_configured": true, 00:09:56.542 "data_offset": 2048, 00:09:56.542 "data_size": 63488 00:09:56.542 } 00:09:56.542 ] 00:09:56.542 }' 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.542 23:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.802 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:56.802 23:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.802 23:49:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:56.802 [2024-11-02 23:49:50.842938] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:56.802 [2024-11-02 23:49:50.843039] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.802 [2024-11-02 23:49:50.845903] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.802 [2024-11-02 23:49:50.845980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.802 [2024-11-02 23:49:50.846026] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.802 [2024-11-02 23:49:50.846035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:09:56.802 { 00:09:56.802 "results": [ 00:09:56.802 { 00:09:56.802 "job": "raid_bdev1", 00:09:56.802 "core_mask": "0x1", 00:09:56.802 "workload": "randrw", 00:09:56.802 "percentage": 50, 00:09:56.802 "status": "finished", 00:09:56.802 "queue_depth": 1, 00:09:56.802 "io_size": 131072, 00:09:56.802 "runtime": 1.36502, 00:09:56.802 "iops": 16045.186151118665, 00:09:56.802 "mibps": 2005.6482688898332, 00:09:56.802 "io_failed": 1, 00:09:56.802 "io_timeout": 0, 00:09:56.802 "avg_latency_us": 86.26340663987526, 00:09:56.802 "min_latency_us": 27.053275109170304, 00:09:56.802 "max_latency_us": 1595.4724890829693 00:09:56.802 } 00:09:56.802 ], 00:09:56.802 "core_count": 1 00:09:56.802 } 00:09:56.802 23:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.802 23:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83671 00:09:56.802 23:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 83671 ']' 00:09:56.802 23:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 83671 00:09:56.802 23:49:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@957 -- # uname 00:09:56.802 23:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:56.802 23:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83671 00:09:56.802 23:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:56.802 killing process with pid 83671 00:09:56.802 23:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:56.802 23:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83671' 00:09:56.802 23:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 83671 00:09:56.802 [2024-11-02 23:49:50.893376] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.802 23:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 83671 00:09:57.061 [2024-11-02 23:49:50.929003] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.061 23:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qfxyeqGieo 00:09:57.061 23:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:57.061 23:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:57.061 23:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:57.061 23:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:57.061 ************************************ 00:09:57.061 END TEST raid_write_error_test 00:09:57.061 ************************************ 00:09:57.061 23:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.061 23:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:57.061 23:49:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:57.061 00:09:57.061 real 0m3.270s 00:09:57.061 user 0m4.058s 00:09:57.061 sys 0m0.574s 00:09:57.061 23:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:57.061 23:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.321 23:49:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:57.321 23:49:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:09:57.321 23:49:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:57.321 23:49:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:57.321 23:49:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.321 ************************************ 00:09:57.321 START TEST raid_state_function_test 00:09:57.321 ************************************ 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:57.321 23:49:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:57.321 Process raid pid: 83798 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83798 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83798' 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83798 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 83798 ']' 00:09:57.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:57.321 23:49:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.321 [2024-11-02 23:49:51.322951] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:09:57.321 [2024-11-02 23:49:51.323105] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.581 [2024-11-02 23:49:51.480049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.581 [2024-11-02 23:49:51.506934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.581 [2024-11-02 23:49:51.549841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.581 [2024-11-02 23:49:51.549956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.150 [2024-11-02 23:49:52.167334] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.150 [2024-11-02 23:49:52.167396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.150 [2024-11-02 23:49:52.167410] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.150 [2024-11-02 23:49:52.167422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.150 [2024-11-02 23:49:52.167429] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:58.150 [2024-11-02 23:49:52.167440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.150 [2024-11-02 23:49:52.167447] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:58.150 [2024-11-02 23:49:52.167456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.150 "name": "Existed_Raid", 00:09:58.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.150 "strip_size_kb": 0, 00:09:58.150 "state": "configuring", 00:09:58.150 "raid_level": "raid1", 00:09:58.150 "superblock": false, 00:09:58.150 "num_base_bdevs": 4, 00:09:58.150 "num_base_bdevs_discovered": 0, 00:09:58.150 "num_base_bdevs_operational": 4, 00:09:58.150 "base_bdevs_list": [ 00:09:58.150 { 00:09:58.150 "name": "BaseBdev1", 00:09:58.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.150 "is_configured": false, 00:09:58.150 "data_offset": 0, 00:09:58.150 "data_size": 0 00:09:58.150 }, 00:09:58.150 { 00:09:58.150 "name": "BaseBdev2", 00:09:58.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.150 "is_configured": false, 00:09:58.150 "data_offset": 0, 00:09:58.150 "data_size": 0 00:09:58.150 }, 00:09:58.150 { 00:09:58.150 "name": "BaseBdev3", 00:09:58.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.150 "is_configured": false, 00:09:58.150 "data_offset": 0, 00:09:58.150 "data_size": 0 00:09:58.150 }, 00:09:58.150 { 00:09:58.150 "name": "BaseBdev4", 00:09:58.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.150 "is_configured": false, 00:09:58.150 "data_offset": 0, 00:09:58.150 "data_size": 0 00:09:58.150 } 00:09:58.150 ] 00:09:58.150 }' 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.150 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.720 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:09:58.720 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.720 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.720 [2024-11-02 23:49:52.554620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.720 [2024-11-02 23:49:52.554707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:58.720 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.720 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:58.720 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.720 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.720 [2024-11-02 23:49:52.566612] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.720 [2024-11-02 23:49:52.566691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.720 [2024-11-02 23:49:52.566718] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.720 [2024-11-02 23:49:52.566751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.721 [2024-11-02 23:49:52.566772] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:58.721 [2024-11-02 23:49:52.566793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.721 [2024-11-02 23:49:52.566811] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:58.721 [2024-11-02 23:49:52.566831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.721 [2024-11-02 23:49:52.587538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.721 BaseBdev1 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.721 [ 00:09:58.721 { 00:09:58.721 "name": "BaseBdev1", 00:09:58.721 "aliases": [ 00:09:58.721 "a0de8342-0d8c-4ea9-8625-74f62cfcbfc6" 00:09:58.721 ], 00:09:58.721 "product_name": "Malloc disk", 00:09:58.721 "block_size": 512, 00:09:58.721 "num_blocks": 65536, 00:09:58.721 "uuid": "a0de8342-0d8c-4ea9-8625-74f62cfcbfc6", 00:09:58.721 "assigned_rate_limits": { 00:09:58.721 "rw_ios_per_sec": 0, 00:09:58.721 "rw_mbytes_per_sec": 0, 00:09:58.721 "r_mbytes_per_sec": 0, 00:09:58.721 "w_mbytes_per_sec": 0 00:09:58.721 }, 00:09:58.721 "claimed": true, 00:09:58.721 "claim_type": "exclusive_write", 00:09:58.721 "zoned": false, 00:09:58.721 "supported_io_types": { 00:09:58.721 "read": true, 00:09:58.721 "write": true, 00:09:58.721 "unmap": true, 00:09:58.721 "flush": true, 00:09:58.721 "reset": true, 00:09:58.721 "nvme_admin": false, 00:09:58.721 "nvme_io": false, 00:09:58.721 "nvme_io_md": false, 00:09:58.721 "write_zeroes": true, 00:09:58.721 "zcopy": true, 00:09:58.721 "get_zone_info": false, 00:09:58.721 "zone_management": false, 00:09:58.721 "zone_append": false, 00:09:58.721 "compare": false, 00:09:58.721 "compare_and_write": false, 00:09:58.721 "abort": true, 00:09:58.721 "seek_hole": false, 00:09:58.721 "seek_data": false, 00:09:58.721 "copy": true, 00:09:58.721 "nvme_iov_md": false 00:09:58.721 }, 00:09:58.721 "memory_domains": [ 00:09:58.721 { 00:09:58.721 "dma_device_id": "system", 00:09:58.721 "dma_device_type": 1 00:09:58.721 }, 00:09:58.721 { 00:09:58.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.721 "dma_device_type": 2 00:09:58.721 } 00:09:58.721 ], 00:09:58.721 "driver_specific": {} 00:09:58.721 } 00:09:58.721 ] 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.721 "name": "Existed_Raid", 00:09:58.721 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:58.721 "strip_size_kb": 0, 00:09:58.721 "state": "configuring", 00:09:58.721 "raid_level": "raid1", 00:09:58.721 "superblock": false, 00:09:58.721 "num_base_bdevs": 4, 00:09:58.721 "num_base_bdevs_discovered": 1, 00:09:58.721 "num_base_bdevs_operational": 4, 00:09:58.721 "base_bdevs_list": [ 00:09:58.721 { 00:09:58.721 "name": "BaseBdev1", 00:09:58.721 "uuid": "a0de8342-0d8c-4ea9-8625-74f62cfcbfc6", 00:09:58.721 "is_configured": true, 00:09:58.721 "data_offset": 0, 00:09:58.721 "data_size": 65536 00:09:58.721 }, 00:09:58.721 { 00:09:58.721 "name": "BaseBdev2", 00:09:58.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.721 "is_configured": false, 00:09:58.721 "data_offset": 0, 00:09:58.721 "data_size": 0 00:09:58.721 }, 00:09:58.721 { 00:09:58.721 "name": "BaseBdev3", 00:09:58.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.721 "is_configured": false, 00:09:58.721 "data_offset": 0, 00:09:58.721 "data_size": 0 00:09:58.721 }, 00:09:58.721 { 00:09:58.721 "name": "BaseBdev4", 00:09:58.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.721 "is_configured": false, 00:09:58.721 "data_offset": 0, 00:09:58.721 "data_size": 0 00:09:58.721 } 00:09:58.721 ] 00:09:58.721 }' 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.721 23:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.980 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:58.980 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.980 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.980 [2024-11-02 23:49:53.054835] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.980 [2024-11-02 23:49:53.054947] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:58.980 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.980 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:58.980 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.980 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.981 [2024-11-02 23:49:53.066828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.981 [2024-11-02 23:49:53.068839] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.981 [2024-11-02 23:49:53.068929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.981 [2024-11-02 23:49:53.068958] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:58.981 [2024-11-02 23:49:53.068981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.981 [2024-11-02 23:49:53.068999] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:58.981 [2024-11-02 23:49:53.069019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:58.981 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.981 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:59.239 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.239 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:59.239 23:49:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.239 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.239 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.239 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.239 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.239 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.239 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.239 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.239 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.239 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.239 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.239 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.239 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.239 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.239 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.239 "name": "Existed_Raid", 00:09:59.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.239 "strip_size_kb": 0, 00:09:59.239 "state": "configuring", 00:09:59.239 "raid_level": "raid1", 00:09:59.239 "superblock": false, 00:09:59.239 "num_base_bdevs": 4, 00:09:59.239 "num_base_bdevs_discovered": 1, 00:09:59.239 
"num_base_bdevs_operational": 4, 00:09:59.239 "base_bdevs_list": [ 00:09:59.239 { 00:09:59.239 "name": "BaseBdev1", 00:09:59.239 "uuid": "a0de8342-0d8c-4ea9-8625-74f62cfcbfc6", 00:09:59.239 "is_configured": true, 00:09:59.239 "data_offset": 0, 00:09:59.239 "data_size": 65536 00:09:59.239 }, 00:09:59.239 { 00:09:59.239 "name": "BaseBdev2", 00:09:59.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.239 "is_configured": false, 00:09:59.239 "data_offset": 0, 00:09:59.239 "data_size": 0 00:09:59.239 }, 00:09:59.239 { 00:09:59.239 "name": "BaseBdev3", 00:09:59.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.239 "is_configured": false, 00:09:59.239 "data_offset": 0, 00:09:59.239 "data_size": 0 00:09:59.239 }, 00:09:59.239 { 00:09:59.239 "name": "BaseBdev4", 00:09:59.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.239 "is_configured": false, 00:09:59.239 "data_offset": 0, 00:09:59.239 "data_size": 0 00:09:59.239 } 00:09:59.239 ] 00:09:59.239 }' 00:09:59.239 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.239 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.497 [2024-11-02 23:49:53.517203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.497 BaseBdev2 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # 
local bdev_name=BaseBdev2 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.497 [ 00:09:59.497 { 00:09:59.497 "name": "BaseBdev2", 00:09:59.497 "aliases": [ 00:09:59.497 "5fc23aec-205b-4a4b-9c9b-5286e9994463" 00:09:59.497 ], 00:09:59.497 "product_name": "Malloc disk", 00:09:59.497 "block_size": 512, 00:09:59.497 "num_blocks": 65536, 00:09:59.497 "uuid": "5fc23aec-205b-4a4b-9c9b-5286e9994463", 00:09:59.497 "assigned_rate_limits": { 00:09:59.497 "rw_ios_per_sec": 0, 00:09:59.497 "rw_mbytes_per_sec": 0, 00:09:59.497 "r_mbytes_per_sec": 0, 00:09:59.497 "w_mbytes_per_sec": 0 00:09:59.497 }, 00:09:59.497 "claimed": true, 00:09:59.497 "claim_type": "exclusive_write", 00:09:59.497 "zoned": false, 00:09:59.497 "supported_io_types": { 00:09:59.497 "read": true, 00:09:59.497 "write": true, 00:09:59.497 
"unmap": true, 00:09:59.497 "flush": true, 00:09:59.497 "reset": true, 00:09:59.497 "nvme_admin": false, 00:09:59.497 "nvme_io": false, 00:09:59.497 "nvme_io_md": false, 00:09:59.497 "write_zeroes": true, 00:09:59.497 "zcopy": true, 00:09:59.497 "get_zone_info": false, 00:09:59.497 "zone_management": false, 00:09:59.497 "zone_append": false, 00:09:59.497 "compare": false, 00:09:59.497 "compare_and_write": false, 00:09:59.497 "abort": true, 00:09:59.497 "seek_hole": false, 00:09:59.497 "seek_data": false, 00:09:59.497 "copy": true, 00:09:59.497 "nvme_iov_md": false 00:09:59.497 }, 00:09:59.497 "memory_domains": [ 00:09:59.497 { 00:09:59.497 "dma_device_id": "system", 00:09:59.497 "dma_device_type": 1 00:09:59.497 }, 00:09:59.497 { 00:09:59.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.497 "dma_device_type": 2 00:09:59.497 } 00:09:59.497 ], 00:09:59.497 "driver_specific": {} 00:09:59.497 } 00:09:59.497 ] 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.497 23:49:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.497 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.755 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.755 "name": "Existed_Raid", 00:09:59.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.755 "strip_size_kb": 0, 00:09:59.755 "state": "configuring", 00:09:59.755 "raid_level": "raid1", 00:09:59.755 "superblock": false, 00:09:59.755 "num_base_bdevs": 4, 00:09:59.755 "num_base_bdevs_discovered": 2, 00:09:59.755 "num_base_bdevs_operational": 4, 00:09:59.755 "base_bdevs_list": [ 00:09:59.755 { 00:09:59.755 "name": "BaseBdev1", 00:09:59.755 "uuid": "a0de8342-0d8c-4ea9-8625-74f62cfcbfc6", 00:09:59.755 "is_configured": true, 00:09:59.755 "data_offset": 0, 00:09:59.755 "data_size": 65536 00:09:59.755 }, 00:09:59.755 { 00:09:59.755 "name": "BaseBdev2", 00:09:59.755 "uuid": "5fc23aec-205b-4a4b-9c9b-5286e9994463", 00:09:59.755 "is_configured": true, 00:09:59.755 
"data_offset": 0, 00:09:59.755 "data_size": 65536 00:09:59.755 }, 00:09:59.755 { 00:09:59.755 "name": "BaseBdev3", 00:09:59.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.755 "is_configured": false, 00:09:59.755 "data_offset": 0, 00:09:59.755 "data_size": 0 00:09:59.755 }, 00:09:59.755 { 00:09:59.755 "name": "BaseBdev4", 00:09:59.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.755 "is_configured": false, 00:09:59.755 "data_offset": 0, 00:09:59.755 "data_size": 0 00:09:59.755 } 00:09:59.755 ] 00:09:59.755 }' 00:09:59.755 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.755 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.014 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:00.014 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.014 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.014 [2024-11-02 23:49:53.993296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.014 BaseBdev3 00:10:00.014 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.014 23:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:00.014 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:00.014 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:00.014 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:00.014 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:00.014 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:10:00.014 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:00.014 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.014 23:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.014 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.014 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:00.014 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.014 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.014 [ 00:10:00.014 { 00:10:00.014 "name": "BaseBdev3", 00:10:00.014 "aliases": [ 00:10:00.014 "0ffaf685-b600-4bc3-bc58-86f766430199" 00:10:00.014 ], 00:10:00.014 "product_name": "Malloc disk", 00:10:00.014 "block_size": 512, 00:10:00.014 "num_blocks": 65536, 00:10:00.014 "uuid": "0ffaf685-b600-4bc3-bc58-86f766430199", 00:10:00.014 "assigned_rate_limits": { 00:10:00.014 "rw_ios_per_sec": 0, 00:10:00.014 "rw_mbytes_per_sec": 0, 00:10:00.014 "r_mbytes_per_sec": 0, 00:10:00.014 "w_mbytes_per_sec": 0 00:10:00.014 }, 00:10:00.014 "claimed": true, 00:10:00.014 "claim_type": "exclusive_write", 00:10:00.014 "zoned": false, 00:10:00.014 "supported_io_types": { 00:10:00.014 "read": true, 00:10:00.014 "write": true, 00:10:00.014 "unmap": true, 00:10:00.014 "flush": true, 00:10:00.014 "reset": true, 00:10:00.014 "nvme_admin": false, 00:10:00.014 "nvme_io": false, 00:10:00.014 "nvme_io_md": false, 00:10:00.014 "write_zeroes": true, 00:10:00.014 "zcopy": true, 00:10:00.014 "get_zone_info": false, 00:10:00.014 "zone_management": false, 00:10:00.014 "zone_append": false, 00:10:00.014 "compare": false, 00:10:00.014 "compare_and_write": false, 00:10:00.014 "abort": true, 
00:10:00.014 "seek_hole": false, 00:10:00.014 "seek_data": false, 00:10:00.014 "copy": true, 00:10:00.014 "nvme_iov_md": false 00:10:00.014 }, 00:10:00.014 "memory_domains": [ 00:10:00.014 { 00:10:00.014 "dma_device_id": "system", 00:10:00.014 "dma_device_type": 1 00:10:00.014 }, 00:10:00.014 { 00:10:00.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.014 "dma_device_type": 2 00:10:00.014 } 00:10:00.014 ], 00:10:00.014 "driver_specific": {} 00:10:00.014 } 00:10:00.014 ] 00:10:00.014 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.014 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:00.014 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.014 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.014 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:00.014 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.014 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.015 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.015 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.015 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.015 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.015 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.015 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.015 23:49:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.015 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.015 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.015 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.015 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.015 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.015 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.015 "name": "Existed_Raid", 00:10:00.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.015 "strip_size_kb": 0, 00:10:00.015 "state": "configuring", 00:10:00.015 "raid_level": "raid1", 00:10:00.015 "superblock": false, 00:10:00.015 "num_base_bdevs": 4, 00:10:00.015 "num_base_bdevs_discovered": 3, 00:10:00.015 "num_base_bdevs_operational": 4, 00:10:00.015 "base_bdevs_list": [ 00:10:00.015 { 00:10:00.015 "name": "BaseBdev1", 00:10:00.015 "uuid": "a0de8342-0d8c-4ea9-8625-74f62cfcbfc6", 00:10:00.015 "is_configured": true, 00:10:00.015 "data_offset": 0, 00:10:00.015 "data_size": 65536 00:10:00.015 }, 00:10:00.015 { 00:10:00.015 "name": "BaseBdev2", 00:10:00.015 "uuid": "5fc23aec-205b-4a4b-9c9b-5286e9994463", 00:10:00.015 "is_configured": true, 00:10:00.015 "data_offset": 0, 00:10:00.015 "data_size": 65536 00:10:00.015 }, 00:10:00.015 { 00:10:00.015 "name": "BaseBdev3", 00:10:00.015 "uuid": "0ffaf685-b600-4bc3-bc58-86f766430199", 00:10:00.015 "is_configured": true, 00:10:00.015 "data_offset": 0, 00:10:00.015 "data_size": 65536 00:10:00.015 }, 00:10:00.015 { 00:10:00.015 "name": "BaseBdev4", 00:10:00.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.015 "is_configured": false, 00:10:00.015 "data_offset": 
0, 00:10:00.015 "data_size": 0 00:10:00.015 } 00:10:00.015 ] 00:10:00.015 }' 00:10:00.015 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.015 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.583 [2024-11-02 23:49:54.475726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:00.583 [2024-11-02 23:49:54.475884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:00.583 [2024-11-02 23:49:54.475900] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:00.583 [2024-11-02 23:49:54.476219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:00.583 [2024-11-02 23:49:54.476376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:00.583 [2024-11-02 23:49:54.476389] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:10:00.583 [2024-11-02 23:49:54.476611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.583 BaseBdev4 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local 
bdev_timeout= 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.583 [ 00:10:00.583 { 00:10:00.583 "name": "BaseBdev4", 00:10:00.583 "aliases": [ 00:10:00.583 "033a8e2d-f0bd-4429-b542-beb9f2cbc63b" 00:10:00.583 ], 00:10:00.583 "product_name": "Malloc disk", 00:10:00.583 "block_size": 512, 00:10:00.583 "num_blocks": 65536, 00:10:00.583 "uuid": "033a8e2d-f0bd-4429-b542-beb9f2cbc63b", 00:10:00.583 "assigned_rate_limits": { 00:10:00.583 "rw_ios_per_sec": 0, 00:10:00.583 "rw_mbytes_per_sec": 0, 00:10:00.583 "r_mbytes_per_sec": 0, 00:10:00.583 "w_mbytes_per_sec": 0 00:10:00.583 }, 00:10:00.583 "claimed": true, 00:10:00.583 "claim_type": "exclusive_write", 00:10:00.583 "zoned": false, 00:10:00.583 "supported_io_types": { 00:10:00.583 "read": true, 00:10:00.583 "write": true, 00:10:00.583 "unmap": true, 00:10:00.583 "flush": true, 00:10:00.583 "reset": true, 00:10:00.583 "nvme_admin": false, 00:10:00.583 "nvme_io": 
false, 00:10:00.583 "nvme_io_md": false, 00:10:00.583 "write_zeroes": true, 00:10:00.583 "zcopy": true, 00:10:00.583 "get_zone_info": false, 00:10:00.583 "zone_management": false, 00:10:00.583 "zone_append": false, 00:10:00.583 "compare": false, 00:10:00.583 "compare_and_write": false, 00:10:00.583 "abort": true, 00:10:00.583 "seek_hole": false, 00:10:00.583 "seek_data": false, 00:10:00.583 "copy": true, 00:10:00.583 "nvme_iov_md": false 00:10:00.583 }, 00:10:00.583 "memory_domains": [ 00:10:00.583 { 00:10:00.583 "dma_device_id": "system", 00:10:00.583 "dma_device_type": 1 00:10:00.583 }, 00:10:00.583 { 00:10:00.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.583 "dma_device_type": 2 00:10:00.583 } 00:10:00.583 ], 00:10:00.583 "driver_specific": {} 00:10:00.583 } 00:10:00.583 ] 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.583 "name": "Existed_Raid", 00:10:00.583 "uuid": "41db65d3-38e2-4741-b004-55e2144f9611", 00:10:00.583 "strip_size_kb": 0, 00:10:00.583 "state": "online", 00:10:00.583 "raid_level": "raid1", 00:10:00.583 "superblock": false, 00:10:00.583 "num_base_bdevs": 4, 00:10:00.583 "num_base_bdevs_discovered": 4, 00:10:00.583 "num_base_bdevs_operational": 4, 00:10:00.583 "base_bdevs_list": [ 00:10:00.583 { 00:10:00.583 "name": "BaseBdev1", 00:10:00.583 "uuid": "a0de8342-0d8c-4ea9-8625-74f62cfcbfc6", 00:10:00.583 "is_configured": true, 00:10:00.583 "data_offset": 0, 00:10:00.583 "data_size": 65536 00:10:00.583 }, 00:10:00.583 { 00:10:00.583 "name": "BaseBdev2", 00:10:00.583 "uuid": "5fc23aec-205b-4a4b-9c9b-5286e9994463", 00:10:00.583 "is_configured": true, 00:10:00.583 "data_offset": 0, 00:10:00.583 "data_size": 65536 00:10:00.583 }, 00:10:00.583 { 00:10:00.583 "name": "BaseBdev3", 00:10:00.583 "uuid": "0ffaf685-b600-4bc3-bc58-86f766430199", 
00:10:00.583 "is_configured": true, 00:10:00.583 "data_offset": 0, 00:10:00.583 "data_size": 65536 00:10:00.583 }, 00:10:00.583 { 00:10:00.583 "name": "BaseBdev4", 00:10:00.583 "uuid": "033a8e2d-f0bd-4429-b542-beb9f2cbc63b", 00:10:00.583 "is_configured": true, 00:10:00.583 "data_offset": 0, 00:10:00.583 "data_size": 65536 00:10:00.583 } 00:10:00.583 ] 00:10:00.583 }' 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.583 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.152 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:01.152 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:01.152 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.152 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.152 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.152 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.152 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:01.152 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.152 23:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.152 23:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.152 [2024-11-02 23:49:54.987276] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.152 "name": "Existed_Raid", 00:10:01.152 "aliases": [ 00:10:01.152 "41db65d3-38e2-4741-b004-55e2144f9611" 00:10:01.152 ], 00:10:01.152 "product_name": "Raid Volume", 00:10:01.152 "block_size": 512, 00:10:01.152 "num_blocks": 65536, 00:10:01.152 "uuid": "41db65d3-38e2-4741-b004-55e2144f9611", 00:10:01.152 "assigned_rate_limits": { 00:10:01.152 "rw_ios_per_sec": 0, 00:10:01.152 "rw_mbytes_per_sec": 0, 00:10:01.152 "r_mbytes_per_sec": 0, 00:10:01.152 "w_mbytes_per_sec": 0 00:10:01.152 }, 00:10:01.152 "claimed": false, 00:10:01.152 "zoned": false, 00:10:01.152 "supported_io_types": { 00:10:01.152 "read": true, 00:10:01.152 "write": true, 00:10:01.152 "unmap": false, 00:10:01.152 "flush": false, 00:10:01.152 "reset": true, 00:10:01.152 "nvme_admin": false, 00:10:01.152 "nvme_io": false, 00:10:01.152 "nvme_io_md": false, 00:10:01.152 "write_zeroes": true, 00:10:01.152 "zcopy": false, 00:10:01.152 "get_zone_info": false, 00:10:01.152 "zone_management": false, 00:10:01.152 "zone_append": false, 00:10:01.152 "compare": false, 00:10:01.152 "compare_and_write": false, 00:10:01.152 "abort": false, 00:10:01.152 "seek_hole": false, 00:10:01.152 "seek_data": false, 00:10:01.152 "copy": false, 00:10:01.152 "nvme_iov_md": false 00:10:01.152 }, 00:10:01.152 "memory_domains": [ 00:10:01.152 { 00:10:01.152 "dma_device_id": "system", 00:10:01.152 "dma_device_type": 1 00:10:01.152 }, 00:10:01.152 { 00:10:01.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.152 "dma_device_type": 2 00:10:01.152 }, 00:10:01.152 { 00:10:01.152 "dma_device_id": "system", 00:10:01.152 "dma_device_type": 1 00:10:01.152 }, 00:10:01.152 { 00:10:01.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.152 "dma_device_type": 2 00:10:01.152 }, 00:10:01.152 { 00:10:01.152 "dma_device_id": "system", 00:10:01.152 "dma_device_type": 1 00:10:01.152 }, 00:10:01.152 { 00:10:01.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.152 "dma_device_type": 2 
00:10:01.152 }, 00:10:01.152 { 00:10:01.152 "dma_device_id": "system", 00:10:01.152 "dma_device_type": 1 00:10:01.152 }, 00:10:01.152 { 00:10:01.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.152 "dma_device_type": 2 00:10:01.152 } 00:10:01.152 ], 00:10:01.152 "driver_specific": { 00:10:01.152 "raid": { 00:10:01.152 "uuid": "41db65d3-38e2-4741-b004-55e2144f9611", 00:10:01.152 "strip_size_kb": 0, 00:10:01.152 "state": "online", 00:10:01.152 "raid_level": "raid1", 00:10:01.152 "superblock": false, 00:10:01.152 "num_base_bdevs": 4, 00:10:01.152 "num_base_bdevs_discovered": 4, 00:10:01.152 "num_base_bdevs_operational": 4, 00:10:01.152 "base_bdevs_list": [ 00:10:01.152 { 00:10:01.152 "name": "BaseBdev1", 00:10:01.152 "uuid": "a0de8342-0d8c-4ea9-8625-74f62cfcbfc6", 00:10:01.152 "is_configured": true, 00:10:01.152 "data_offset": 0, 00:10:01.152 "data_size": 65536 00:10:01.152 }, 00:10:01.152 { 00:10:01.152 "name": "BaseBdev2", 00:10:01.152 "uuid": "5fc23aec-205b-4a4b-9c9b-5286e9994463", 00:10:01.152 "is_configured": true, 00:10:01.152 "data_offset": 0, 00:10:01.152 "data_size": 65536 00:10:01.152 }, 00:10:01.152 { 00:10:01.152 "name": "BaseBdev3", 00:10:01.152 "uuid": "0ffaf685-b600-4bc3-bc58-86f766430199", 00:10:01.152 "is_configured": true, 00:10:01.152 "data_offset": 0, 00:10:01.152 "data_size": 65536 00:10:01.152 }, 00:10:01.152 { 00:10:01.152 "name": "BaseBdev4", 00:10:01.152 "uuid": "033a8e2d-f0bd-4429-b542-beb9f2cbc63b", 00:10:01.152 "is_configured": true, 00:10:01.152 "data_offset": 0, 00:10:01.152 "data_size": 65536 00:10:01.152 } 00:10:01.152 ] 00:10:01.152 } 00:10:01.152 } 00:10:01.152 }' 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:01.152 BaseBdev2 00:10:01.152 BaseBdev3 00:10:01.152 BaseBdev4' 00:10:01.152 
23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.152 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.411 [2024-11-02 23:49:55.314436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.411 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.411 "name": "Existed_Raid", 00:10:01.411 "uuid": "41db65d3-38e2-4741-b004-55e2144f9611", 00:10:01.411 "strip_size_kb": 0, 00:10:01.411 "state": "online", 00:10:01.411 "raid_level": "raid1", 00:10:01.411 "superblock": false, 00:10:01.411 "num_base_bdevs": 4, 00:10:01.411 "num_base_bdevs_discovered": 3, 00:10:01.411 "num_base_bdevs_operational": 3, 00:10:01.411 "base_bdevs_list": [ 00:10:01.411 { 00:10:01.411 "name": null, 00:10:01.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.411 "is_configured": false, 00:10:01.411 "data_offset": 0, 00:10:01.411 "data_size": 65536 00:10:01.411 }, 00:10:01.411 { 00:10:01.411 "name": "BaseBdev2", 00:10:01.411 "uuid": "5fc23aec-205b-4a4b-9c9b-5286e9994463", 00:10:01.411 "is_configured": true, 00:10:01.411 "data_offset": 0, 00:10:01.411 "data_size": 65536 00:10:01.411 }, 00:10:01.411 { 00:10:01.412 "name": "BaseBdev3", 00:10:01.412 "uuid": "0ffaf685-b600-4bc3-bc58-86f766430199", 00:10:01.412 "is_configured": true, 00:10:01.412 "data_offset": 0, 00:10:01.412 "data_size": 65536 00:10:01.412 }, 00:10:01.412 { 
00:10:01.412 "name": "BaseBdev4", 00:10:01.412 "uuid": "033a8e2d-f0bd-4429-b542-beb9f2cbc63b", 00:10:01.412 "is_configured": true, 00:10:01.412 "data_offset": 0, 00:10:01.412 "data_size": 65536 00:10:01.412 } 00:10:01.412 ] 00:10:01.412 }' 00:10:01.412 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.412 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.978 [2024-11-02 23:49:55.821167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.978 
23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.978 [2024-11-02 23:49:55.880403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:01.978 23:49:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.978 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.978 [2024-11-02 23:49:55.951640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:01.978 [2024-11-02 23:49:55.951735] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.978 [2024-11-02 23:49:55.963478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.979 [2024-11-02 23:49:55.963610] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.979 [2024-11-02 23:49:55.963628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:01.979 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.979 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:01.979 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.979 23:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.979 23:49:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:01.979 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.979 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.979 23:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.979 BaseBdev2 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:01.979 23:49:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.979 [ 00:10:01.979 { 00:10:01.979 "name": "BaseBdev2", 00:10:01.979 "aliases": [ 00:10:01.979 "829efdf5-58c1-4c54-a543-261cd68e99e7" 00:10:01.979 ], 00:10:01.979 "product_name": "Malloc disk", 00:10:01.979 "block_size": 512, 00:10:01.979 "num_blocks": 65536, 00:10:01.979 "uuid": "829efdf5-58c1-4c54-a543-261cd68e99e7", 00:10:01.979 "assigned_rate_limits": { 00:10:01.979 "rw_ios_per_sec": 0, 00:10:01.979 "rw_mbytes_per_sec": 0, 00:10:01.979 "r_mbytes_per_sec": 0, 00:10:01.979 "w_mbytes_per_sec": 0 00:10:01.979 }, 00:10:01.979 "claimed": false, 00:10:01.979 "zoned": false, 00:10:01.979 "supported_io_types": { 00:10:01.979 "read": true, 00:10:01.979 "write": true, 00:10:01.979 "unmap": true, 00:10:01.979 "flush": true, 00:10:01.979 "reset": true, 00:10:01.979 "nvme_admin": false, 00:10:01.979 "nvme_io": false, 00:10:01.979 "nvme_io_md": false, 00:10:01.979 "write_zeroes": true, 00:10:01.979 "zcopy": true, 00:10:01.979 "get_zone_info": false, 00:10:01.979 "zone_management": false, 00:10:01.979 "zone_append": false, 00:10:01.979 "compare": false, 00:10:01.979 "compare_and_write": false, 
00:10:01.979 "abort": true, 00:10:01.979 "seek_hole": false, 00:10:01.979 "seek_data": false, 00:10:01.979 "copy": true, 00:10:01.979 "nvme_iov_md": false 00:10:01.979 }, 00:10:01.979 "memory_domains": [ 00:10:01.979 { 00:10:01.979 "dma_device_id": "system", 00:10:01.979 "dma_device_type": 1 00:10:01.979 }, 00:10:01.979 { 00:10:01.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.979 "dma_device_type": 2 00:10:01.979 } 00:10:01.979 ], 00:10:01.979 "driver_specific": {} 00:10:01.979 } 00:10:01.979 ] 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.979 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.239 BaseBdev3 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:02.239 23:49:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.239 [ 00:10:02.239 { 00:10:02.239 "name": "BaseBdev3", 00:10:02.239 "aliases": [ 00:10:02.239 "e4403eb0-016d-4e35-bcb7-42aff926af46" 00:10:02.239 ], 00:10:02.239 "product_name": "Malloc disk", 00:10:02.239 "block_size": 512, 00:10:02.239 "num_blocks": 65536, 00:10:02.239 "uuid": "e4403eb0-016d-4e35-bcb7-42aff926af46", 00:10:02.239 "assigned_rate_limits": { 00:10:02.239 "rw_ios_per_sec": 0, 00:10:02.239 "rw_mbytes_per_sec": 0, 00:10:02.239 "r_mbytes_per_sec": 0, 00:10:02.239 "w_mbytes_per_sec": 0 00:10:02.239 }, 00:10:02.239 "claimed": false, 00:10:02.239 "zoned": false, 00:10:02.239 "supported_io_types": { 00:10:02.239 "read": true, 00:10:02.239 "write": true, 00:10:02.239 "unmap": true, 00:10:02.239 "flush": true, 00:10:02.239 "reset": true, 00:10:02.239 "nvme_admin": false, 00:10:02.239 "nvme_io": false, 00:10:02.239 "nvme_io_md": false, 00:10:02.239 "write_zeroes": true, 00:10:02.239 "zcopy": true, 00:10:02.239 "get_zone_info": false, 00:10:02.239 "zone_management": false, 00:10:02.239 "zone_append": false, 00:10:02.239 "compare": false, 00:10:02.239 "compare_and_write": false, 
00:10:02.239 "abort": true, 00:10:02.239 "seek_hole": false, 00:10:02.239 "seek_data": false, 00:10:02.239 "copy": true, 00:10:02.239 "nvme_iov_md": false 00:10:02.239 }, 00:10:02.239 "memory_domains": [ 00:10:02.239 { 00:10:02.239 "dma_device_id": "system", 00:10:02.239 "dma_device_type": 1 00:10:02.239 }, 00:10:02.239 { 00:10:02.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.239 "dma_device_type": 2 00:10:02.239 } 00:10:02.239 ], 00:10:02.239 "driver_specific": {} 00:10:02.239 } 00:10:02.239 ] 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.239 BaseBdev4 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:02.239 23:49:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.239 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.240 [ 00:10:02.240 { 00:10:02.240 "name": "BaseBdev4", 00:10:02.240 "aliases": [ 00:10:02.240 "2f7b3e13-a7b6-4a08-b63f-29d55cac2198" 00:10:02.240 ], 00:10:02.240 "product_name": "Malloc disk", 00:10:02.240 "block_size": 512, 00:10:02.240 "num_blocks": 65536, 00:10:02.240 "uuid": "2f7b3e13-a7b6-4a08-b63f-29d55cac2198", 00:10:02.240 "assigned_rate_limits": { 00:10:02.240 "rw_ios_per_sec": 0, 00:10:02.240 "rw_mbytes_per_sec": 0, 00:10:02.240 "r_mbytes_per_sec": 0, 00:10:02.240 "w_mbytes_per_sec": 0 00:10:02.240 }, 00:10:02.240 "claimed": false, 00:10:02.240 "zoned": false, 00:10:02.240 "supported_io_types": { 00:10:02.240 "read": true, 00:10:02.240 "write": true, 00:10:02.240 "unmap": true, 00:10:02.240 "flush": true, 00:10:02.240 "reset": true, 00:10:02.240 "nvme_admin": false, 00:10:02.240 "nvme_io": false, 00:10:02.240 "nvme_io_md": false, 00:10:02.240 "write_zeroes": true, 00:10:02.240 "zcopy": true, 00:10:02.240 "get_zone_info": false, 00:10:02.240 "zone_management": false, 00:10:02.240 "zone_append": false, 00:10:02.240 "compare": false, 00:10:02.240 "compare_and_write": false, 
00:10:02.240 "abort": true, 00:10:02.240 "seek_hole": false, 00:10:02.240 "seek_data": false, 00:10:02.240 "copy": true, 00:10:02.240 "nvme_iov_md": false 00:10:02.240 }, 00:10:02.240 "memory_domains": [ 00:10:02.240 { 00:10:02.240 "dma_device_id": "system", 00:10:02.240 "dma_device_type": 1 00:10:02.240 }, 00:10:02.240 { 00:10:02.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.240 "dma_device_type": 2 00:10:02.240 } 00:10:02.240 ], 00:10:02.240 "driver_specific": {} 00:10:02.240 } 00:10:02.240 ] 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.240 [2024-11-02 23:49:56.185042] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.240 [2024-11-02 23:49:56.185145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.240 [2024-11-02 23:49:56.185191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.240 [2024-11-02 23:49:56.187112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.240 [2024-11-02 23:49:56.187197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:02.240 23:49:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.240 "name": "Existed_Raid", 00:10:02.240 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:02.240 "strip_size_kb": 0, 00:10:02.240 "state": "configuring", 00:10:02.240 "raid_level": "raid1", 00:10:02.240 "superblock": false, 00:10:02.240 "num_base_bdevs": 4, 00:10:02.240 "num_base_bdevs_discovered": 3, 00:10:02.240 "num_base_bdevs_operational": 4, 00:10:02.240 "base_bdevs_list": [ 00:10:02.240 { 00:10:02.240 "name": "BaseBdev1", 00:10:02.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.240 "is_configured": false, 00:10:02.240 "data_offset": 0, 00:10:02.240 "data_size": 0 00:10:02.240 }, 00:10:02.240 { 00:10:02.240 "name": "BaseBdev2", 00:10:02.240 "uuid": "829efdf5-58c1-4c54-a543-261cd68e99e7", 00:10:02.240 "is_configured": true, 00:10:02.240 "data_offset": 0, 00:10:02.240 "data_size": 65536 00:10:02.240 }, 00:10:02.240 { 00:10:02.240 "name": "BaseBdev3", 00:10:02.240 "uuid": "e4403eb0-016d-4e35-bcb7-42aff926af46", 00:10:02.240 "is_configured": true, 00:10:02.240 "data_offset": 0, 00:10:02.240 "data_size": 65536 00:10:02.240 }, 00:10:02.240 { 00:10:02.240 "name": "BaseBdev4", 00:10:02.240 "uuid": "2f7b3e13-a7b6-4a08-b63f-29d55cac2198", 00:10:02.240 "is_configured": true, 00:10:02.240 "data_offset": 0, 00:10:02.240 "data_size": 65536 00:10:02.240 } 00:10:02.240 ] 00:10:02.240 }' 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.240 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.810 [2024-11-02 23:49:56.608358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.810 "name": "Existed_Raid", 00:10:02.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.810 
"strip_size_kb": 0, 00:10:02.810 "state": "configuring", 00:10:02.810 "raid_level": "raid1", 00:10:02.810 "superblock": false, 00:10:02.810 "num_base_bdevs": 4, 00:10:02.810 "num_base_bdevs_discovered": 2, 00:10:02.810 "num_base_bdevs_operational": 4, 00:10:02.810 "base_bdevs_list": [ 00:10:02.810 { 00:10:02.810 "name": "BaseBdev1", 00:10:02.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.810 "is_configured": false, 00:10:02.810 "data_offset": 0, 00:10:02.810 "data_size": 0 00:10:02.810 }, 00:10:02.810 { 00:10:02.810 "name": null, 00:10:02.810 "uuid": "829efdf5-58c1-4c54-a543-261cd68e99e7", 00:10:02.810 "is_configured": false, 00:10:02.810 "data_offset": 0, 00:10:02.810 "data_size": 65536 00:10:02.810 }, 00:10:02.810 { 00:10:02.810 "name": "BaseBdev3", 00:10:02.810 "uuid": "e4403eb0-016d-4e35-bcb7-42aff926af46", 00:10:02.810 "is_configured": true, 00:10:02.810 "data_offset": 0, 00:10:02.810 "data_size": 65536 00:10:02.810 }, 00:10:02.810 { 00:10:02.810 "name": "BaseBdev4", 00:10:02.810 "uuid": "2f7b3e13-a7b6-4a08-b63f-29d55cac2198", 00:10:02.810 "is_configured": true, 00:10:02.810 "data_offset": 0, 00:10:02.810 "data_size": 65536 00:10:02.810 } 00:10:02.810 ] 00:10:02.810 }' 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.810 23:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.080 23:49:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.080 [2024-11-02 23:49:57.070574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.080 BaseBdev1 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.080 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.080 [ 00:10:03.080 { 00:10:03.080 "name": "BaseBdev1", 00:10:03.080 "aliases": [ 00:10:03.081 "8b2ffee5-4ca9-43d7-b998-a8307f9aae97" 00:10:03.081 ], 00:10:03.081 "product_name": "Malloc disk", 00:10:03.081 "block_size": 512, 00:10:03.081 "num_blocks": 65536, 00:10:03.081 "uuid": "8b2ffee5-4ca9-43d7-b998-a8307f9aae97", 00:10:03.081 "assigned_rate_limits": { 00:10:03.081 "rw_ios_per_sec": 0, 00:10:03.081 "rw_mbytes_per_sec": 0, 00:10:03.081 "r_mbytes_per_sec": 0, 00:10:03.081 "w_mbytes_per_sec": 0 00:10:03.081 }, 00:10:03.081 "claimed": true, 00:10:03.081 "claim_type": "exclusive_write", 00:10:03.081 "zoned": false, 00:10:03.081 "supported_io_types": { 00:10:03.081 "read": true, 00:10:03.081 "write": true, 00:10:03.081 "unmap": true, 00:10:03.081 "flush": true, 00:10:03.081 "reset": true, 00:10:03.081 "nvme_admin": false, 00:10:03.081 "nvme_io": false, 00:10:03.081 "nvme_io_md": false, 00:10:03.081 "write_zeroes": true, 00:10:03.081 "zcopy": true, 00:10:03.081 "get_zone_info": false, 00:10:03.081 "zone_management": false, 00:10:03.081 "zone_append": false, 00:10:03.081 "compare": false, 00:10:03.081 "compare_and_write": false, 00:10:03.081 "abort": true, 00:10:03.081 "seek_hole": false, 00:10:03.081 "seek_data": false, 00:10:03.081 "copy": true, 00:10:03.081 "nvme_iov_md": false 00:10:03.081 }, 00:10:03.081 "memory_domains": [ 00:10:03.081 { 00:10:03.081 "dma_device_id": "system", 00:10:03.081 "dma_device_type": 1 00:10:03.081 }, 00:10:03.081 { 00:10:03.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.081 "dma_device_type": 2 00:10:03.081 } 00:10:03.081 ], 00:10:03.081 "driver_specific": {} 00:10:03.081 } 00:10:03.081 ] 00:10:03.081 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.081 23:49:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@909 -- # return 0 00:10:03.081 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:03.081 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.081 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.081 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.081 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.081 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.081 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.081 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.081 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.081 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.081 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.081 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.081 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.081 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.081 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.081 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.081 "name": "Existed_Raid", 00:10:03.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.081 
"strip_size_kb": 0, 00:10:03.081 "state": "configuring", 00:10:03.081 "raid_level": "raid1", 00:10:03.081 "superblock": false, 00:10:03.081 "num_base_bdevs": 4, 00:10:03.081 "num_base_bdevs_discovered": 3, 00:10:03.081 "num_base_bdevs_operational": 4, 00:10:03.081 "base_bdevs_list": [ 00:10:03.081 { 00:10:03.081 "name": "BaseBdev1", 00:10:03.081 "uuid": "8b2ffee5-4ca9-43d7-b998-a8307f9aae97", 00:10:03.081 "is_configured": true, 00:10:03.081 "data_offset": 0, 00:10:03.081 "data_size": 65536 00:10:03.081 }, 00:10:03.081 { 00:10:03.081 "name": null, 00:10:03.082 "uuid": "829efdf5-58c1-4c54-a543-261cd68e99e7", 00:10:03.082 "is_configured": false, 00:10:03.082 "data_offset": 0, 00:10:03.082 "data_size": 65536 00:10:03.082 }, 00:10:03.082 { 00:10:03.082 "name": "BaseBdev3", 00:10:03.082 "uuid": "e4403eb0-016d-4e35-bcb7-42aff926af46", 00:10:03.082 "is_configured": true, 00:10:03.082 "data_offset": 0, 00:10:03.082 "data_size": 65536 00:10:03.082 }, 00:10:03.082 { 00:10:03.082 "name": "BaseBdev4", 00:10:03.082 "uuid": "2f7b3e13-a7b6-4a08-b63f-29d55cac2198", 00:10:03.082 "is_configured": true, 00:10:03.082 "data_offset": 0, 00:10:03.082 "data_size": 65536 00:10:03.082 } 00:10:03.082 ] 00:10:03.082 }' 00:10:03.082 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.082 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.655 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.655 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.655 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.655 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.656 
23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.656 [2024-11-02 23:49:57.621736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.656 "name": "Existed_Raid", 00:10:03.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.656 "strip_size_kb": 0, 00:10:03.656 "state": "configuring", 00:10:03.656 "raid_level": "raid1", 00:10:03.656 "superblock": false, 00:10:03.656 "num_base_bdevs": 4, 00:10:03.656 "num_base_bdevs_discovered": 2, 00:10:03.656 "num_base_bdevs_operational": 4, 00:10:03.656 "base_bdevs_list": [ 00:10:03.656 { 00:10:03.656 "name": "BaseBdev1", 00:10:03.656 "uuid": "8b2ffee5-4ca9-43d7-b998-a8307f9aae97", 00:10:03.656 "is_configured": true, 00:10:03.656 "data_offset": 0, 00:10:03.656 "data_size": 65536 00:10:03.656 }, 00:10:03.656 { 00:10:03.656 "name": null, 00:10:03.656 "uuid": "829efdf5-58c1-4c54-a543-261cd68e99e7", 00:10:03.656 "is_configured": false, 00:10:03.656 "data_offset": 0, 00:10:03.656 "data_size": 65536 00:10:03.656 }, 00:10:03.656 { 00:10:03.656 "name": null, 00:10:03.656 "uuid": "e4403eb0-016d-4e35-bcb7-42aff926af46", 00:10:03.656 "is_configured": false, 00:10:03.656 "data_offset": 0, 00:10:03.656 "data_size": 65536 00:10:03.656 }, 00:10:03.656 { 00:10:03.656 "name": "BaseBdev4", 00:10:03.656 "uuid": "2f7b3e13-a7b6-4a08-b63f-29d55cac2198", 00:10:03.656 "is_configured": true, 00:10:03.656 "data_offset": 0, 00:10:03.656 "data_size": 65536 00:10:03.656 } 00:10:03.656 ] 00:10:03.656 }' 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.656 23:49:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:04.225 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:04.225 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.225 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.225 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.225 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.225 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:04.225 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:04.225 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.225 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.225 [2024-11-02 23:49:58.104897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.226 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.226 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:04.226 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.226 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.226 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.226 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.226 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:04.226 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.226 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.226 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.226 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.226 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.226 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.226 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.226 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.226 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.226 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.226 "name": "Existed_Raid", 00:10:04.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.226 "strip_size_kb": 0, 00:10:04.226 "state": "configuring", 00:10:04.226 "raid_level": "raid1", 00:10:04.226 "superblock": false, 00:10:04.226 "num_base_bdevs": 4, 00:10:04.226 "num_base_bdevs_discovered": 3, 00:10:04.226 "num_base_bdevs_operational": 4, 00:10:04.226 "base_bdevs_list": [ 00:10:04.226 { 00:10:04.226 "name": "BaseBdev1", 00:10:04.226 "uuid": "8b2ffee5-4ca9-43d7-b998-a8307f9aae97", 00:10:04.226 "is_configured": true, 00:10:04.226 "data_offset": 0, 00:10:04.226 "data_size": 65536 00:10:04.226 }, 00:10:04.226 { 00:10:04.226 "name": null, 00:10:04.226 "uuid": "829efdf5-58c1-4c54-a543-261cd68e99e7", 00:10:04.226 "is_configured": false, 00:10:04.226 "data_offset": 0, 00:10:04.226 "data_size": 65536 00:10:04.226 }, 00:10:04.226 { 
00:10:04.226 "name": "BaseBdev3", 00:10:04.226 "uuid": "e4403eb0-016d-4e35-bcb7-42aff926af46", 00:10:04.226 "is_configured": true, 00:10:04.226 "data_offset": 0, 00:10:04.226 "data_size": 65536 00:10:04.226 }, 00:10:04.226 { 00:10:04.226 "name": "BaseBdev4", 00:10:04.226 "uuid": "2f7b3e13-a7b6-4a08-b63f-29d55cac2198", 00:10:04.226 "is_configured": true, 00:10:04.226 "data_offset": 0, 00:10:04.226 "data_size": 65536 00:10:04.226 } 00:10:04.226 ] 00:10:04.226 }' 00:10:04.226 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.226 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.486 [2024-11-02 23:49:58.552186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.486 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.746 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.746 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.746 "name": "Existed_Raid", 00:10:04.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.746 "strip_size_kb": 0, 00:10:04.746 "state": "configuring", 00:10:04.746 "raid_level": "raid1", 00:10:04.746 "superblock": false, 00:10:04.746 
"num_base_bdevs": 4, 00:10:04.746 "num_base_bdevs_discovered": 2, 00:10:04.746 "num_base_bdevs_operational": 4, 00:10:04.746 "base_bdevs_list": [ 00:10:04.746 { 00:10:04.746 "name": null, 00:10:04.746 "uuid": "8b2ffee5-4ca9-43d7-b998-a8307f9aae97", 00:10:04.746 "is_configured": false, 00:10:04.746 "data_offset": 0, 00:10:04.746 "data_size": 65536 00:10:04.746 }, 00:10:04.746 { 00:10:04.746 "name": null, 00:10:04.746 "uuid": "829efdf5-58c1-4c54-a543-261cd68e99e7", 00:10:04.746 "is_configured": false, 00:10:04.746 "data_offset": 0, 00:10:04.746 "data_size": 65536 00:10:04.746 }, 00:10:04.746 { 00:10:04.746 "name": "BaseBdev3", 00:10:04.746 "uuid": "e4403eb0-016d-4e35-bcb7-42aff926af46", 00:10:04.746 "is_configured": true, 00:10:04.746 "data_offset": 0, 00:10:04.746 "data_size": 65536 00:10:04.746 }, 00:10:04.746 { 00:10:04.746 "name": "BaseBdev4", 00:10:04.746 "uuid": "2f7b3e13-a7b6-4a08-b63f-29d55cac2198", 00:10:04.746 "is_configured": true, 00:10:04.746 "data_offset": 0, 00:10:04.746 "data_size": 65536 00:10:04.746 } 00:10:04.746 ] 00:10:04.746 }' 00:10:04.746 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.746 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.004 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.004 23:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:05.004 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.004 23:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.004 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.004 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:05.004 23:49:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:05.004 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.004 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.004 [2024-11-02 23:49:59.037945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.004 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.004 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:05.004 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.004 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.004 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.004 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.004 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.004 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.004 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.004 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.004 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.004 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.004 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.005 23:49:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.005 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.005 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.005 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.005 "name": "Existed_Raid", 00:10:05.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.005 "strip_size_kb": 0, 00:10:05.005 "state": "configuring", 00:10:05.005 "raid_level": "raid1", 00:10:05.005 "superblock": false, 00:10:05.005 "num_base_bdevs": 4, 00:10:05.005 "num_base_bdevs_discovered": 3, 00:10:05.005 "num_base_bdevs_operational": 4, 00:10:05.005 "base_bdevs_list": [ 00:10:05.005 { 00:10:05.005 "name": null, 00:10:05.005 "uuid": "8b2ffee5-4ca9-43d7-b998-a8307f9aae97", 00:10:05.005 "is_configured": false, 00:10:05.005 "data_offset": 0, 00:10:05.005 "data_size": 65536 00:10:05.005 }, 00:10:05.005 { 00:10:05.005 "name": "BaseBdev2", 00:10:05.005 "uuid": "829efdf5-58c1-4c54-a543-261cd68e99e7", 00:10:05.005 "is_configured": true, 00:10:05.005 "data_offset": 0, 00:10:05.005 "data_size": 65536 00:10:05.005 }, 00:10:05.005 { 00:10:05.005 "name": "BaseBdev3", 00:10:05.005 "uuid": "e4403eb0-016d-4e35-bcb7-42aff926af46", 00:10:05.005 "is_configured": true, 00:10:05.005 "data_offset": 0, 00:10:05.005 "data_size": 65536 00:10:05.005 }, 00:10:05.005 { 00:10:05.005 "name": "BaseBdev4", 00:10:05.005 "uuid": "2f7b3e13-a7b6-4a08-b63f-29d55cac2198", 00:10:05.005 "is_configured": true, 00:10:05.005 "data_offset": 0, 00:10:05.005 "data_size": 65536 00:10:05.005 } 00:10:05.005 ] 00:10:05.005 }' 00:10:05.005 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.005 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8b2ffee5-4ca9-43d7-b998-a8307f9aae97 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.572 [2024-11-02 23:49:59.572037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:05.572 [2024-11-02 23:49:59.572080] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:05.572 [2024-11-02 23:49:59.572091] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:05.572 [2024-11-02 23:49:59.572343] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:10:05.572 [2024-11-02 23:49:59.572479] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:05.572 [2024-11-02 23:49:59.572489] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:05.572 [2024-11-02 23:49:59.572675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.572 NewBaseBdev 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:05.572 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.572 23:49:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.572 [ 00:10:05.572 { 00:10:05.572 "name": "NewBaseBdev", 00:10:05.572 "aliases": [ 00:10:05.572 "8b2ffee5-4ca9-43d7-b998-a8307f9aae97" 00:10:05.572 ], 00:10:05.572 "product_name": "Malloc disk", 00:10:05.572 "block_size": 512, 00:10:05.572 "num_blocks": 65536, 00:10:05.572 "uuid": "8b2ffee5-4ca9-43d7-b998-a8307f9aae97", 00:10:05.572 "assigned_rate_limits": { 00:10:05.572 "rw_ios_per_sec": 0, 00:10:05.572 "rw_mbytes_per_sec": 0, 00:10:05.572 "r_mbytes_per_sec": 0, 00:10:05.573 "w_mbytes_per_sec": 0 00:10:05.573 }, 00:10:05.573 "claimed": true, 00:10:05.573 "claim_type": "exclusive_write", 00:10:05.573 "zoned": false, 00:10:05.573 "supported_io_types": { 00:10:05.573 "read": true, 00:10:05.573 "write": true, 00:10:05.573 "unmap": true, 00:10:05.573 "flush": true, 00:10:05.573 "reset": true, 00:10:05.573 "nvme_admin": false, 00:10:05.573 "nvme_io": false, 00:10:05.573 "nvme_io_md": false, 00:10:05.573 "write_zeroes": true, 00:10:05.573 "zcopy": true, 00:10:05.573 "get_zone_info": false, 00:10:05.573 "zone_management": false, 00:10:05.573 "zone_append": false, 00:10:05.573 "compare": false, 00:10:05.573 "compare_and_write": false, 00:10:05.573 "abort": true, 00:10:05.573 "seek_hole": false, 00:10:05.573 "seek_data": false, 00:10:05.573 "copy": true, 00:10:05.573 "nvme_iov_md": false 00:10:05.573 }, 00:10:05.573 "memory_domains": [ 00:10:05.573 { 00:10:05.573 "dma_device_id": "system", 00:10:05.573 "dma_device_type": 1 00:10:05.573 }, 00:10:05.573 { 00:10:05.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.573 "dma_device_type": 2 00:10:05.573 } 00:10:05.573 ], 00:10:05.573 "driver_specific": {} 00:10:05.573 } 00:10:05.573 ] 00:10:05.573 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.573 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:05.573 23:49:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:05.573 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.573 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.573 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.573 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.573 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.573 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.573 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.573 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.573 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.573 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.573 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.573 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.573 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.573 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.573 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.573 "name": "Existed_Raid", 00:10:05.573 "uuid": "dd03e31b-e46e-41f4-a45a-eda7e21f579b", 00:10:05.573 "strip_size_kb": 0, 00:10:05.573 "state": "online", 00:10:05.573 "raid_level": "raid1", 
00:10:05.573 "superblock": false, 00:10:05.573 "num_base_bdevs": 4, 00:10:05.573 "num_base_bdevs_discovered": 4, 00:10:05.573 "num_base_bdevs_operational": 4, 00:10:05.573 "base_bdevs_list": [ 00:10:05.573 { 00:10:05.573 "name": "NewBaseBdev", 00:10:05.573 "uuid": "8b2ffee5-4ca9-43d7-b998-a8307f9aae97", 00:10:05.573 "is_configured": true, 00:10:05.573 "data_offset": 0, 00:10:05.573 "data_size": 65536 00:10:05.573 }, 00:10:05.573 { 00:10:05.573 "name": "BaseBdev2", 00:10:05.573 "uuid": "829efdf5-58c1-4c54-a543-261cd68e99e7", 00:10:05.573 "is_configured": true, 00:10:05.573 "data_offset": 0, 00:10:05.573 "data_size": 65536 00:10:05.573 }, 00:10:05.573 { 00:10:05.573 "name": "BaseBdev3", 00:10:05.573 "uuid": "e4403eb0-016d-4e35-bcb7-42aff926af46", 00:10:05.573 "is_configured": true, 00:10:05.573 "data_offset": 0, 00:10:05.573 "data_size": 65536 00:10:05.573 }, 00:10:05.573 { 00:10:05.573 "name": "BaseBdev4", 00:10:05.573 "uuid": "2f7b3e13-a7b6-4a08-b63f-29d55cac2198", 00:10:05.573 "is_configured": true, 00:10:05.573 "data_offset": 0, 00:10:05.573 "data_size": 65536 00:10:05.573 } 00:10:05.573 ] 00:10:05.573 }' 00:10:05.573 23:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.573 23:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.140 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:06.140 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:06.140 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:06.140 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:06.141 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.141 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:10:06.141 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:06.141 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.141 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.141 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.141 [2024-11-02 23:50:00.063643] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.141 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.141 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.141 "name": "Existed_Raid", 00:10:06.141 "aliases": [ 00:10:06.141 "dd03e31b-e46e-41f4-a45a-eda7e21f579b" 00:10:06.141 ], 00:10:06.141 "product_name": "Raid Volume", 00:10:06.141 "block_size": 512, 00:10:06.141 "num_blocks": 65536, 00:10:06.141 "uuid": "dd03e31b-e46e-41f4-a45a-eda7e21f579b", 00:10:06.141 "assigned_rate_limits": { 00:10:06.141 "rw_ios_per_sec": 0, 00:10:06.141 "rw_mbytes_per_sec": 0, 00:10:06.141 "r_mbytes_per_sec": 0, 00:10:06.141 "w_mbytes_per_sec": 0 00:10:06.141 }, 00:10:06.141 "claimed": false, 00:10:06.141 "zoned": false, 00:10:06.141 "supported_io_types": { 00:10:06.141 "read": true, 00:10:06.141 "write": true, 00:10:06.141 "unmap": false, 00:10:06.141 "flush": false, 00:10:06.141 "reset": true, 00:10:06.141 "nvme_admin": false, 00:10:06.141 "nvme_io": false, 00:10:06.141 "nvme_io_md": false, 00:10:06.141 "write_zeroes": true, 00:10:06.141 "zcopy": false, 00:10:06.141 "get_zone_info": false, 00:10:06.141 "zone_management": false, 00:10:06.141 "zone_append": false, 00:10:06.141 "compare": false, 00:10:06.141 "compare_and_write": false, 00:10:06.141 "abort": false, 00:10:06.141 "seek_hole": false, 00:10:06.141 "seek_data": false, 00:10:06.141 "copy": false, 00:10:06.141 
"nvme_iov_md": false 00:10:06.141 }, 00:10:06.141 "memory_domains": [ 00:10:06.141 { 00:10:06.141 "dma_device_id": "system", 00:10:06.141 "dma_device_type": 1 00:10:06.141 }, 00:10:06.141 { 00:10:06.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.141 "dma_device_type": 2 00:10:06.141 }, 00:10:06.141 { 00:10:06.141 "dma_device_id": "system", 00:10:06.141 "dma_device_type": 1 00:10:06.141 }, 00:10:06.141 { 00:10:06.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.141 "dma_device_type": 2 00:10:06.141 }, 00:10:06.141 { 00:10:06.141 "dma_device_id": "system", 00:10:06.141 "dma_device_type": 1 00:10:06.141 }, 00:10:06.141 { 00:10:06.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.141 "dma_device_type": 2 00:10:06.141 }, 00:10:06.141 { 00:10:06.141 "dma_device_id": "system", 00:10:06.141 "dma_device_type": 1 00:10:06.141 }, 00:10:06.141 { 00:10:06.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.141 "dma_device_type": 2 00:10:06.141 } 00:10:06.141 ], 00:10:06.141 "driver_specific": { 00:10:06.141 "raid": { 00:10:06.141 "uuid": "dd03e31b-e46e-41f4-a45a-eda7e21f579b", 00:10:06.141 "strip_size_kb": 0, 00:10:06.141 "state": "online", 00:10:06.141 "raid_level": "raid1", 00:10:06.141 "superblock": false, 00:10:06.141 "num_base_bdevs": 4, 00:10:06.141 "num_base_bdevs_discovered": 4, 00:10:06.141 "num_base_bdevs_operational": 4, 00:10:06.141 "base_bdevs_list": [ 00:10:06.141 { 00:10:06.141 "name": "NewBaseBdev", 00:10:06.141 "uuid": "8b2ffee5-4ca9-43d7-b998-a8307f9aae97", 00:10:06.141 "is_configured": true, 00:10:06.141 "data_offset": 0, 00:10:06.141 "data_size": 65536 00:10:06.141 }, 00:10:06.141 { 00:10:06.141 "name": "BaseBdev2", 00:10:06.141 "uuid": "829efdf5-58c1-4c54-a543-261cd68e99e7", 00:10:06.141 "is_configured": true, 00:10:06.141 "data_offset": 0, 00:10:06.141 "data_size": 65536 00:10:06.141 }, 00:10:06.141 { 00:10:06.141 "name": "BaseBdev3", 00:10:06.141 "uuid": "e4403eb0-016d-4e35-bcb7-42aff926af46", 00:10:06.141 "is_configured": true, 
00:10:06.141 "data_offset": 0, 00:10:06.141 "data_size": 65536 00:10:06.141 }, 00:10:06.141 { 00:10:06.141 "name": "BaseBdev4", 00:10:06.141 "uuid": "2f7b3e13-a7b6-4a08-b63f-29d55cac2198", 00:10:06.141 "is_configured": true, 00:10:06.141 "data_offset": 0, 00:10:06.141 "data_size": 65536 00:10:06.141 } 00:10:06.141 ] 00:10:06.141 } 00:10:06.141 } 00:10:06.141 }' 00:10:06.141 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.141 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:06.141 BaseBdev2 00:10:06.141 BaseBdev3 00:10:06.141 BaseBdev4' 00:10:06.141 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.141 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:06.141 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.141 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:06.141 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.141 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.141 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.141 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.401 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.402 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.402 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.402 [2024-11-02 23:50:00.366788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.402 [2024-11-02 23:50:00.366817] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.402 [2024-11-02 23:50:00.366913] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.402 [2024-11-02 23:50:00.367200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.402 [2024-11-02 23:50:00.367217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:06.402 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.402 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83798 
00:10:06.402 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 83798 ']' 00:10:06.402 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 83798 00:10:06.402 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:06.402 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:06.402 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83798 00:10:06.402 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:06.402 killing process with pid 83798 00:10:06.402 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:06.402 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83798' 00:10:06.402 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 83798 00:10:06.402 [2024-11-02 23:50:00.414545] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:06.402 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 83798 00:10:06.402 [2024-11-02 23:50:00.455989] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:06.662 00:10:06.662 real 0m9.454s 00:10:06.662 user 0m16.104s 00:10:06.662 sys 0m2.043s 00:10:06.662 ************************************ 00:10:06.662 END TEST raid_state_function_test 00:10:06.662 ************************************ 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.662 23:50:00 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:06.662 23:50:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:06.662 23:50:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:06.662 23:50:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:06.662 ************************************ 00:10:06.662 START TEST raid_state_function_test_sb 00:10:06.662 ************************************ 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.662 23:50:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:06.662 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:06.921 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:06.921 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:06.921 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:06.922 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:06.922 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84453 00:10:06.922 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:06.922 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84453' 00:10:06.922 Process raid pid: 84453 00:10:06.922 23:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84453 00:10:06.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.922 23:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 84453 ']' 00:10:06.922 23:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.922 23:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:06.922 23:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.922 23:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:06.922 23:50:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.922 [2024-11-02 23:50:00.846391] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:10:06.922 [2024-11-02 23:50:00.846535] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.922 [2024-11-02 23:50:01.005647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.180 [2024-11-02 23:50:01.032289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.180 [2024-11-02 23:50:01.074944] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.180 [2024-11-02 23:50:01.074982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.748 [2024-11-02 23:50:01.696205] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:07.748 [2024-11-02 23:50:01.696265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:07.748 [2024-11-02 23:50:01.696277] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:07.748 [2024-11-02 23:50:01.696286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:07.748 [2024-11-02 23:50:01.696292] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:07.748 [2024-11-02 23:50:01.696303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:07.748 [2024-11-02 23:50:01.696309] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:07.748 [2024-11-02 23:50:01.696317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.748 23:50:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.748 "name": "Existed_Raid", 00:10:07.748 "uuid": "ea7c2531-f5fc-436d-b139-d1e91eff51a3", 00:10:07.748 "strip_size_kb": 0, 00:10:07.748 "state": "configuring", 00:10:07.748 "raid_level": "raid1", 00:10:07.748 "superblock": true, 00:10:07.748 "num_base_bdevs": 4, 00:10:07.748 "num_base_bdevs_discovered": 0, 00:10:07.748 "num_base_bdevs_operational": 4, 00:10:07.748 "base_bdevs_list": [ 00:10:07.748 { 00:10:07.748 "name": "BaseBdev1", 00:10:07.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.748 "is_configured": false, 00:10:07.748 "data_offset": 0, 00:10:07.748 "data_size": 0 00:10:07.748 }, 00:10:07.748 { 00:10:07.748 "name": "BaseBdev2", 00:10:07.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.748 "is_configured": false, 00:10:07.748 "data_offset": 0, 00:10:07.748 "data_size": 0 00:10:07.748 }, 00:10:07.748 { 00:10:07.748 "name": "BaseBdev3", 00:10:07.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.748 "is_configured": false, 00:10:07.748 "data_offset": 0, 00:10:07.748 "data_size": 0 00:10:07.748 }, 00:10:07.748 { 00:10:07.748 "name": "BaseBdev4", 00:10:07.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.748 "is_configured": false, 00:10:07.748 "data_offset": 0, 00:10:07.748 "data_size": 0 00:10:07.748 } 00:10:07.748 ] 00:10:07.748 }' 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.748 23:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.323 [2024-11-02 23:50:02.131395] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:08.323 [2024-11-02 23:50:02.131508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.323 [2024-11-02 23:50:02.143391] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.323 [2024-11-02 23:50:02.143478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.323 [2024-11-02 23:50:02.143510] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.323 [2024-11-02 23:50:02.143536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.323 [2024-11-02 23:50:02.143568] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:08.323 [2024-11-02 23:50:02.143589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:08.323 [2024-11-02 23:50:02.143668] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:08.323 [2024-11-02 23:50:02.143692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.323 [2024-11-02 23:50:02.164314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.323 BaseBdev1 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.323 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.323 [ 00:10:08.323 { 00:10:08.323 "name": "BaseBdev1", 00:10:08.323 "aliases": [ 00:10:08.323 "013d78fe-5dc5-4191-ac32-1fead1a869ec" 00:10:08.323 ], 00:10:08.323 "product_name": "Malloc disk", 00:10:08.323 "block_size": 512, 00:10:08.323 "num_blocks": 65536, 00:10:08.323 "uuid": "013d78fe-5dc5-4191-ac32-1fead1a869ec", 00:10:08.323 "assigned_rate_limits": { 00:10:08.323 "rw_ios_per_sec": 0, 00:10:08.323 "rw_mbytes_per_sec": 0, 00:10:08.323 "r_mbytes_per_sec": 0, 00:10:08.323 "w_mbytes_per_sec": 0 00:10:08.323 }, 00:10:08.323 "claimed": true, 00:10:08.324 "claim_type": "exclusive_write", 00:10:08.324 "zoned": false, 00:10:08.324 "supported_io_types": { 00:10:08.324 "read": true, 00:10:08.324 "write": true, 00:10:08.324 "unmap": true, 00:10:08.324 "flush": true, 00:10:08.324 "reset": true, 00:10:08.324 "nvme_admin": false, 00:10:08.324 "nvme_io": false, 00:10:08.324 "nvme_io_md": false, 00:10:08.324 "write_zeroes": true, 00:10:08.324 "zcopy": true, 00:10:08.324 "get_zone_info": false, 00:10:08.324 "zone_management": false, 00:10:08.324 "zone_append": false, 00:10:08.324 "compare": false, 00:10:08.324 "compare_and_write": false, 00:10:08.324 "abort": true, 00:10:08.324 "seek_hole": false, 00:10:08.324 "seek_data": false, 00:10:08.324 "copy": true, 00:10:08.324 "nvme_iov_md": false 00:10:08.324 }, 00:10:08.324 "memory_domains": [ 00:10:08.324 { 00:10:08.324 "dma_device_id": "system", 00:10:08.324 "dma_device_type": 1 00:10:08.324 }, 00:10:08.324 { 00:10:08.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.324 "dma_device_type": 2 00:10:08.324 } 00:10:08.324 ], 00:10:08.324 "driver_specific": {} 
00:10:08.324 } 00:10:08.324 ] 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.324 "name": "Existed_Raid", 00:10:08.324 "uuid": "d2397023-756a-4ff1-b5f1-4349c764041b", 00:10:08.324 "strip_size_kb": 0, 00:10:08.324 "state": "configuring", 00:10:08.324 "raid_level": "raid1", 00:10:08.324 "superblock": true, 00:10:08.324 "num_base_bdevs": 4, 00:10:08.324 "num_base_bdevs_discovered": 1, 00:10:08.324 "num_base_bdevs_operational": 4, 00:10:08.324 "base_bdevs_list": [ 00:10:08.324 { 00:10:08.324 "name": "BaseBdev1", 00:10:08.324 "uuid": "013d78fe-5dc5-4191-ac32-1fead1a869ec", 00:10:08.324 "is_configured": true, 00:10:08.324 "data_offset": 2048, 00:10:08.324 "data_size": 63488 00:10:08.324 }, 00:10:08.324 { 00:10:08.324 "name": "BaseBdev2", 00:10:08.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.324 "is_configured": false, 00:10:08.324 "data_offset": 0, 00:10:08.324 "data_size": 0 00:10:08.324 }, 00:10:08.324 { 00:10:08.324 "name": "BaseBdev3", 00:10:08.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.324 "is_configured": false, 00:10:08.324 "data_offset": 0, 00:10:08.324 "data_size": 0 00:10:08.324 }, 00:10:08.324 { 00:10:08.324 "name": "BaseBdev4", 00:10:08.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.324 "is_configured": false, 00:10:08.324 "data_offset": 0, 00:10:08.324 "data_size": 0 00:10:08.324 } 00:10:08.324 ] 00:10:08.324 }' 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.324 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:08.584 [2024-11-02 23:50:02.639550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:08.584 [2024-11-02 23:50:02.639654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.584 [2024-11-02 23:50:02.651554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.584 [2024-11-02 23:50:02.653342] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.584 [2024-11-02 23:50:02.653386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.584 [2024-11-02 23:50:02.653395] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:08.584 [2024-11-02 23:50:02.653405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:08.584 [2024-11-02 23:50:02.653411] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:08.584 [2024-11-02 23:50:02.653419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:08.584 23:50:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.584 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.844 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.844 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.844 "name": 
"Existed_Raid", 00:10:08.844 "uuid": "146922ee-418b-4aeb-91c3-677a676223d2", 00:10:08.844 "strip_size_kb": 0, 00:10:08.844 "state": "configuring", 00:10:08.844 "raid_level": "raid1", 00:10:08.844 "superblock": true, 00:10:08.844 "num_base_bdevs": 4, 00:10:08.844 "num_base_bdevs_discovered": 1, 00:10:08.844 "num_base_bdevs_operational": 4, 00:10:08.844 "base_bdevs_list": [ 00:10:08.844 { 00:10:08.844 "name": "BaseBdev1", 00:10:08.844 "uuid": "013d78fe-5dc5-4191-ac32-1fead1a869ec", 00:10:08.844 "is_configured": true, 00:10:08.844 "data_offset": 2048, 00:10:08.844 "data_size": 63488 00:10:08.844 }, 00:10:08.844 { 00:10:08.844 "name": "BaseBdev2", 00:10:08.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.844 "is_configured": false, 00:10:08.844 "data_offset": 0, 00:10:08.844 "data_size": 0 00:10:08.844 }, 00:10:08.844 { 00:10:08.844 "name": "BaseBdev3", 00:10:08.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.844 "is_configured": false, 00:10:08.844 "data_offset": 0, 00:10:08.844 "data_size": 0 00:10:08.844 }, 00:10:08.844 { 00:10:08.844 "name": "BaseBdev4", 00:10:08.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.844 "is_configured": false, 00:10:08.844 "data_offset": 0, 00:10:08.844 "data_size": 0 00:10:08.844 } 00:10:08.844 ] 00:10:08.844 }' 00:10:08.844 23:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.844 23:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.104 [2024-11-02 23:50:03.133686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.104 
BaseBdev2 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.104 [ 00:10:09.104 { 00:10:09.104 "name": "BaseBdev2", 00:10:09.104 "aliases": [ 00:10:09.104 "7253e576-ee0b-4a1b-8195-eeebfc2e2a9e" 00:10:09.104 ], 00:10:09.104 "product_name": "Malloc disk", 00:10:09.104 "block_size": 512, 00:10:09.104 "num_blocks": 65536, 00:10:09.104 "uuid": "7253e576-ee0b-4a1b-8195-eeebfc2e2a9e", 00:10:09.104 "assigned_rate_limits": { 
00:10:09.104 "rw_ios_per_sec": 0, 00:10:09.104 "rw_mbytes_per_sec": 0, 00:10:09.104 "r_mbytes_per_sec": 0, 00:10:09.104 "w_mbytes_per_sec": 0 00:10:09.104 }, 00:10:09.104 "claimed": true, 00:10:09.104 "claim_type": "exclusive_write", 00:10:09.104 "zoned": false, 00:10:09.104 "supported_io_types": { 00:10:09.104 "read": true, 00:10:09.104 "write": true, 00:10:09.104 "unmap": true, 00:10:09.104 "flush": true, 00:10:09.104 "reset": true, 00:10:09.104 "nvme_admin": false, 00:10:09.104 "nvme_io": false, 00:10:09.104 "nvme_io_md": false, 00:10:09.104 "write_zeroes": true, 00:10:09.104 "zcopy": true, 00:10:09.104 "get_zone_info": false, 00:10:09.104 "zone_management": false, 00:10:09.104 "zone_append": false, 00:10:09.104 "compare": false, 00:10:09.104 "compare_and_write": false, 00:10:09.104 "abort": true, 00:10:09.104 "seek_hole": false, 00:10:09.104 "seek_data": false, 00:10:09.104 "copy": true, 00:10:09.104 "nvme_iov_md": false 00:10:09.104 }, 00:10:09.104 "memory_domains": [ 00:10:09.104 { 00:10:09.104 "dma_device_id": "system", 00:10:09.104 "dma_device_type": 1 00:10:09.104 }, 00:10:09.104 { 00:10:09.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.104 "dma_device_type": 2 00:10:09.104 } 00:10:09.104 ], 00:10:09.104 "driver_specific": {} 00:10:09.104 } 00:10:09.104 ] 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.104 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.105 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.105 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.105 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.105 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.105 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.363 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.363 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.363 "name": "Existed_Raid", 00:10:09.363 "uuid": "146922ee-418b-4aeb-91c3-677a676223d2", 00:10:09.363 "strip_size_kb": 0, 00:10:09.363 "state": "configuring", 00:10:09.363 "raid_level": "raid1", 00:10:09.363 "superblock": true, 00:10:09.363 "num_base_bdevs": 4, 00:10:09.363 "num_base_bdevs_discovered": 2, 00:10:09.363 "num_base_bdevs_operational": 4, 00:10:09.363 
"base_bdevs_list": [ 00:10:09.363 { 00:10:09.363 "name": "BaseBdev1", 00:10:09.363 "uuid": "013d78fe-5dc5-4191-ac32-1fead1a869ec", 00:10:09.363 "is_configured": true, 00:10:09.363 "data_offset": 2048, 00:10:09.363 "data_size": 63488 00:10:09.363 }, 00:10:09.363 { 00:10:09.363 "name": "BaseBdev2", 00:10:09.363 "uuid": "7253e576-ee0b-4a1b-8195-eeebfc2e2a9e", 00:10:09.363 "is_configured": true, 00:10:09.363 "data_offset": 2048, 00:10:09.363 "data_size": 63488 00:10:09.363 }, 00:10:09.363 { 00:10:09.363 "name": "BaseBdev3", 00:10:09.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.363 "is_configured": false, 00:10:09.363 "data_offset": 0, 00:10:09.363 "data_size": 0 00:10:09.363 }, 00:10:09.363 { 00:10:09.363 "name": "BaseBdev4", 00:10:09.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.363 "is_configured": false, 00:10:09.363 "data_offset": 0, 00:10:09.363 "data_size": 0 00:10:09.363 } 00:10:09.363 ] 00:10:09.363 }' 00:10:09.363 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.363 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.633 [2024-11-02 23:50:03.656226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.633 BaseBdev3 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev3 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.633 [ 00:10:09.633 { 00:10:09.633 "name": "BaseBdev3", 00:10:09.633 "aliases": [ 00:10:09.633 "3800907b-0de4-4b8e-859e-19061a74caab" 00:10:09.633 ], 00:10:09.633 "product_name": "Malloc disk", 00:10:09.633 "block_size": 512, 00:10:09.633 "num_blocks": 65536, 00:10:09.633 "uuid": "3800907b-0de4-4b8e-859e-19061a74caab", 00:10:09.633 "assigned_rate_limits": { 00:10:09.633 "rw_ios_per_sec": 0, 00:10:09.633 "rw_mbytes_per_sec": 0, 00:10:09.633 "r_mbytes_per_sec": 0, 00:10:09.633 "w_mbytes_per_sec": 0 00:10:09.633 }, 00:10:09.633 "claimed": true, 00:10:09.633 "claim_type": "exclusive_write", 00:10:09.633 "zoned": false, 00:10:09.633 "supported_io_types": { 00:10:09.633 "read": true, 00:10:09.633 
"write": true, 00:10:09.633 "unmap": true, 00:10:09.633 "flush": true, 00:10:09.633 "reset": true, 00:10:09.633 "nvme_admin": false, 00:10:09.633 "nvme_io": false, 00:10:09.633 "nvme_io_md": false, 00:10:09.633 "write_zeroes": true, 00:10:09.633 "zcopy": true, 00:10:09.633 "get_zone_info": false, 00:10:09.633 "zone_management": false, 00:10:09.633 "zone_append": false, 00:10:09.633 "compare": false, 00:10:09.633 "compare_and_write": false, 00:10:09.633 "abort": true, 00:10:09.633 "seek_hole": false, 00:10:09.633 "seek_data": false, 00:10:09.633 "copy": true, 00:10:09.633 "nvme_iov_md": false 00:10:09.633 }, 00:10:09.633 "memory_domains": [ 00:10:09.633 { 00:10:09.633 "dma_device_id": "system", 00:10:09.633 "dma_device_type": 1 00:10:09.633 }, 00:10:09.633 { 00:10:09.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.633 "dma_device_type": 2 00:10:09.633 } 00:10:09.633 ], 00:10:09.633 "driver_specific": {} 00:10:09.633 } 00:10:09.633 ] 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.633 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.921 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.921 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.921 "name": "Existed_Raid", 00:10:09.921 "uuid": "146922ee-418b-4aeb-91c3-677a676223d2", 00:10:09.921 "strip_size_kb": 0, 00:10:09.921 "state": "configuring", 00:10:09.921 "raid_level": "raid1", 00:10:09.921 "superblock": true, 00:10:09.921 "num_base_bdevs": 4, 00:10:09.921 "num_base_bdevs_discovered": 3, 00:10:09.921 "num_base_bdevs_operational": 4, 00:10:09.921 "base_bdevs_list": [ 00:10:09.921 { 00:10:09.921 "name": "BaseBdev1", 00:10:09.921 "uuid": "013d78fe-5dc5-4191-ac32-1fead1a869ec", 00:10:09.921 "is_configured": true, 00:10:09.921 "data_offset": 2048, 00:10:09.921 "data_size": 63488 00:10:09.921 }, 00:10:09.921 { 00:10:09.921 "name": "BaseBdev2", 00:10:09.921 "uuid": 
"7253e576-ee0b-4a1b-8195-eeebfc2e2a9e", 00:10:09.921 "is_configured": true, 00:10:09.921 "data_offset": 2048, 00:10:09.921 "data_size": 63488 00:10:09.921 }, 00:10:09.921 { 00:10:09.921 "name": "BaseBdev3", 00:10:09.921 "uuid": "3800907b-0de4-4b8e-859e-19061a74caab", 00:10:09.921 "is_configured": true, 00:10:09.921 "data_offset": 2048, 00:10:09.921 "data_size": 63488 00:10:09.921 }, 00:10:09.921 { 00:10:09.921 "name": "BaseBdev4", 00:10:09.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.921 "is_configured": false, 00:10:09.921 "data_offset": 0, 00:10:09.921 "data_size": 0 00:10:09.921 } 00:10:09.921 ] 00:10:09.921 }' 00:10:09.921 23:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.921 23:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.182 [2024-11-02 23:50:04.106468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:10.182 BaseBdev4 00:10:10.182 [2024-11-02 23:50:04.106792] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:10.182 [2024-11-02 23:50:04.106811] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:10.182 [2024-11-02 23:50:04.107115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:10.182 [2024-11-02 23:50:04.107258] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:10.182 [2024-11-02 23:50:04.107272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:10:10.182 [2024-11-02 23:50:04.107405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.182 [ 00:10:10.182 { 00:10:10.182 "name": "BaseBdev4", 00:10:10.182 "aliases": [ 00:10:10.182 "ca85da88-e7c2-4086-ace2-0f36ba5d7c76" 00:10:10.182 ], 00:10:10.182 "product_name": "Malloc disk", 00:10:10.182 "block_size": 512, 00:10:10.182 
"num_blocks": 65536, 00:10:10.182 "uuid": "ca85da88-e7c2-4086-ace2-0f36ba5d7c76", 00:10:10.182 "assigned_rate_limits": { 00:10:10.182 "rw_ios_per_sec": 0, 00:10:10.182 "rw_mbytes_per_sec": 0, 00:10:10.182 "r_mbytes_per_sec": 0, 00:10:10.182 "w_mbytes_per_sec": 0 00:10:10.182 }, 00:10:10.182 "claimed": true, 00:10:10.182 "claim_type": "exclusive_write", 00:10:10.182 "zoned": false, 00:10:10.182 "supported_io_types": { 00:10:10.182 "read": true, 00:10:10.182 "write": true, 00:10:10.182 "unmap": true, 00:10:10.182 "flush": true, 00:10:10.182 "reset": true, 00:10:10.182 "nvme_admin": false, 00:10:10.182 "nvme_io": false, 00:10:10.182 "nvme_io_md": false, 00:10:10.182 "write_zeroes": true, 00:10:10.182 "zcopy": true, 00:10:10.182 "get_zone_info": false, 00:10:10.182 "zone_management": false, 00:10:10.182 "zone_append": false, 00:10:10.182 "compare": false, 00:10:10.182 "compare_and_write": false, 00:10:10.182 "abort": true, 00:10:10.182 "seek_hole": false, 00:10:10.182 "seek_data": false, 00:10:10.182 "copy": true, 00:10:10.182 "nvme_iov_md": false 00:10:10.182 }, 00:10:10.182 "memory_domains": [ 00:10:10.182 { 00:10:10.182 "dma_device_id": "system", 00:10:10.182 "dma_device_type": 1 00:10:10.182 }, 00:10:10.182 { 00:10:10.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.182 "dma_device_type": 2 00:10:10.182 } 00:10:10.182 ], 00:10:10.182 "driver_specific": {} 00:10:10.182 } 00:10:10.182 ] 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.182 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.182 "name": "Existed_Raid", 00:10:10.182 "uuid": "146922ee-418b-4aeb-91c3-677a676223d2", 00:10:10.182 "strip_size_kb": 0, 00:10:10.182 "state": "online", 00:10:10.183 "raid_level": "raid1", 00:10:10.183 "superblock": true, 00:10:10.183 "num_base_bdevs": 4, 
00:10:10.183 "num_base_bdevs_discovered": 4, 00:10:10.183 "num_base_bdevs_operational": 4, 00:10:10.183 "base_bdevs_list": [ 00:10:10.183 { 00:10:10.183 "name": "BaseBdev1", 00:10:10.183 "uuid": "013d78fe-5dc5-4191-ac32-1fead1a869ec", 00:10:10.183 "is_configured": true, 00:10:10.183 "data_offset": 2048, 00:10:10.183 "data_size": 63488 00:10:10.183 }, 00:10:10.183 { 00:10:10.183 "name": "BaseBdev2", 00:10:10.183 "uuid": "7253e576-ee0b-4a1b-8195-eeebfc2e2a9e", 00:10:10.183 "is_configured": true, 00:10:10.183 "data_offset": 2048, 00:10:10.183 "data_size": 63488 00:10:10.183 }, 00:10:10.183 { 00:10:10.183 "name": "BaseBdev3", 00:10:10.183 "uuid": "3800907b-0de4-4b8e-859e-19061a74caab", 00:10:10.183 "is_configured": true, 00:10:10.183 "data_offset": 2048, 00:10:10.183 "data_size": 63488 00:10:10.183 }, 00:10:10.183 { 00:10:10.183 "name": "BaseBdev4", 00:10:10.183 "uuid": "ca85da88-e7c2-4086-ace2-0f36ba5d7c76", 00:10:10.183 "is_configured": true, 00:10:10.183 "data_offset": 2048, 00:10:10.183 "data_size": 63488 00:10:10.183 } 00:10:10.183 ] 00:10:10.183 }' 00:10:10.183 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.183 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.752 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:10.752 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:10.752 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.752 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.752 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.752 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.752 
23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.752 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:10.752 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.752 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.752 [2024-11-02 23:50:04.593995] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.752 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.752 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.752 "name": "Existed_Raid", 00:10:10.752 "aliases": [ 00:10:10.752 "146922ee-418b-4aeb-91c3-677a676223d2" 00:10:10.752 ], 00:10:10.752 "product_name": "Raid Volume", 00:10:10.752 "block_size": 512, 00:10:10.752 "num_blocks": 63488, 00:10:10.752 "uuid": "146922ee-418b-4aeb-91c3-677a676223d2", 00:10:10.752 "assigned_rate_limits": { 00:10:10.752 "rw_ios_per_sec": 0, 00:10:10.752 "rw_mbytes_per_sec": 0, 00:10:10.752 "r_mbytes_per_sec": 0, 00:10:10.752 "w_mbytes_per_sec": 0 00:10:10.752 }, 00:10:10.752 "claimed": false, 00:10:10.752 "zoned": false, 00:10:10.752 "supported_io_types": { 00:10:10.752 "read": true, 00:10:10.752 "write": true, 00:10:10.752 "unmap": false, 00:10:10.752 "flush": false, 00:10:10.752 "reset": true, 00:10:10.752 "nvme_admin": false, 00:10:10.752 "nvme_io": false, 00:10:10.752 "nvme_io_md": false, 00:10:10.752 "write_zeroes": true, 00:10:10.752 "zcopy": false, 00:10:10.752 "get_zone_info": false, 00:10:10.752 "zone_management": false, 00:10:10.752 "zone_append": false, 00:10:10.752 "compare": false, 00:10:10.752 "compare_and_write": false, 00:10:10.752 "abort": false, 00:10:10.752 "seek_hole": false, 00:10:10.752 "seek_data": false, 00:10:10.752 "copy": false, 00:10:10.752 
"nvme_iov_md": false 00:10:10.752 }, 00:10:10.752 "memory_domains": [ 00:10:10.752 { 00:10:10.752 "dma_device_id": "system", 00:10:10.752 "dma_device_type": 1 00:10:10.752 }, 00:10:10.752 { 00:10:10.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.752 "dma_device_type": 2 00:10:10.752 }, 00:10:10.752 { 00:10:10.752 "dma_device_id": "system", 00:10:10.752 "dma_device_type": 1 00:10:10.752 }, 00:10:10.752 { 00:10:10.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.752 "dma_device_type": 2 00:10:10.752 }, 00:10:10.752 { 00:10:10.752 "dma_device_id": "system", 00:10:10.752 "dma_device_type": 1 00:10:10.752 }, 00:10:10.752 { 00:10:10.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.752 "dma_device_type": 2 00:10:10.752 }, 00:10:10.752 { 00:10:10.752 "dma_device_id": "system", 00:10:10.752 "dma_device_type": 1 00:10:10.752 }, 00:10:10.752 { 00:10:10.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.752 "dma_device_type": 2 00:10:10.752 } 00:10:10.752 ], 00:10:10.752 "driver_specific": { 00:10:10.752 "raid": { 00:10:10.752 "uuid": "146922ee-418b-4aeb-91c3-677a676223d2", 00:10:10.752 "strip_size_kb": 0, 00:10:10.752 "state": "online", 00:10:10.752 "raid_level": "raid1", 00:10:10.752 "superblock": true, 00:10:10.752 "num_base_bdevs": 4, 00:10:10.752 "num_base_bdevs_discovered": 4, 00:10:10.752 "num_base_bdevs_operational": 4, 00:10:10.752 "base_bdevs_list": [ 00:10:10.752 { 00:10:10.752 "name": "BaseBdev1", 00:10:10.752 "uuid": "013d78fe-5dc5-4191-ac32-1fead1a869ec", 00:10:10.752 "is_configured": true, 00:10:10.752 "data_offset": 2048, 00:10:10.752 "data_size": 63488 00:10:10.752 }, 00:10:10.752 { 00:10:10.752 "name": "BaseBdev2", 00:10:10.752 "uuid": "7253e576-ee0b-4a1b-8195-eeebfc2e2a9e", 00:10:10.752 "is_configured": true, 00:10:10.752 "data_offset": 2048, 00:10:10.752 "data_size": 63488 00:10:10.752 }, 00:10:10.752 { 00:10:10.752 "name": "BaseBdev3", 00:10:10.752 "uuid": "3800907b-0de4-4b8e-859e-19061a74caab", 00:10:10.752 "is_configured": true, 
00:10:10.752 "data_offset": 2048, 00:10:10.752 "data_size": 63488 00:10:10.752 }, 00:10:10.752 { 00:10:10.752 "name": "BaseBdev4", 00:10:10.752 "uuid": "ca85da88-e7c2-4086-ace2-0f36ba5d7c76", 00:10:10.752 "is_configured": true, 00:10:10.752 "data_offset": 2048, 00:10:10.752 "data_size": 63488 00:10:10.752 } 00:10:10.752 ] 00:10:10.752 } 00:10:10.752 } 00:10:10.752 }' 00:10:10.752 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.752 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:10.752 BaseBdev2 00:10:10.752 BaseBdev3 00:10:10.752 BaseBdev4' 00:10:10.752 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.752 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.752 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.752 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.753 23:50:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.753 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.012 [2024-11-02 23:50:04.897258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:11.012 23:50:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.012 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.012 "name": "Existed_Raid", 00:10:11.012 "uuid": "146922ee-418b-4aeb-91c3-677a676223d2", 00:10:11.012 "strip_size_kb": 0, 00:10:11.012 
"state": "online", 00:10:11.012 "raid_level": "raid1", 00:10:11.012 "superblock": true, 00:10:11.012 "num_base_bdevs": 4, 00:10:11.012 "num_base_bdevs_discovered": 3, 00:10:11.012 "num_base_bdevs_operational": 3, 00:10:11.012 "base_bdevs_list": [ 00:10:11.012 { 00:10:11.012 "name": null, 00:10:11.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.012 "is_configured": false, 00:10:11.012 "data_offset": 0, 00:10:11.012 "data_size": 63488 00:10:11.012 }, 00:10:11.012 { 00:10:11.012 "name": "BaseBdev2", 00:10:11.012 "uuid": "7253e576-ee0b-4a1b-8195-eeebfc2e2a9e", 00:10:11.012 "is_configured": true, 00:10:11.012 "data_offset": 2048, 00:10:11.012 "data_size": 63488 00:10:11.012 }, 00:10:11.012 { 00:10:11.012 "name": "BaseBdev3", 00:10:11.013 "uuid": "3800907b-0de4-4b8e-859e-19061a74caab", 00:10:11.013 "is_configured": true, 00:10:11.013 "data_offset": 2048, 00:10:11.013 "data_size": 63488 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "name": "BaseBdev4", 00:10:11.013 "uuid": "ca85da88-e7c2-4086-ace2-0f36ba5d7c76", 00:10:11.013 "is_configured": true, 00:10:11.013 "data_offset": 2048, 00:10:11.013 "data_size": 63488 00:10:11.013 } 00:10:11.013 ] 00:10:11.013 }' 00:10:11.013 23:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.013 23:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.272 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:11.272 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.272 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.272 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.272 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.272 23:50:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.532 [2024-11-02 23:50:05.407777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.532 [2024-11-02 23:50:05.478967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.532 [2024-11-02 23:50:05.550124] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:11.532 [2024-11-02 23:50:05.550224] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.532 [2024-11-02 23:50:05.561775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.532 [2024-11-02 23:50:05.561829] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.532 [2024-11-02 23:50:05.561842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:11.532 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.533 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.533 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:11.533 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.533 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.533 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.533 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:11.533 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:11.533 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:11.533 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:11.533 23:50:05 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:11.533 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:11.533 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.533 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.793 BaseBdev2 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:11.793 [ 00:10:11.793 { 00:10:11.793 "name": "BaseBdev2", 00:10:11.793 "aliases": [ 00:10:11.793 "acfc8eb7-899b-43cf-b8cb-6aac6d0ba500" 00:10:11.793 ], 00:10:11.793 "product_name": "Malloc disk", 00:10:11.793 "block_size": 512, 00:10:11.793 "num_blocks": 65536, 00:10:11.793 "uuid": "acfc8eb7-899b-43cf-b8cb-6aac6d0ba500", 00:10:11.793 "assigned_rate_limits": { 00:10:11.793 "rw_ios_per_sec": 0, 00:10:11.793 "rw_mbytes_per_sec": 0, 00:10:11.793 "r_mbytes_per_sec": 0, 00:10:11.793 "w_mbytes_per_sec": 0 00:10:11.793 }, 00:10:11.793 "claimed": false, 00:10:11.793 "zoned": false, 00:10:11.793 "supported_io_types": { 00:10:11.793 "read": true, 00:10:11.793 "write": true, 00:10:11.793 "unmap": true, 00:10:11.793 "flush": true, 00:10:11.793 "reset": true, 00:10:11.793 "nvme_admin": false, 00:10:11.793 "nvme_io": false, 00:10:11.793 "nvme_io_md": false, 00:10:11.793 "write_zeroes": true, 00:10:11.793 "zcopy": true, 00:10:11.793 "get_zone_info": false, 00:10:11.793 "zone_management": false, 00:10:11.793 "zone_append": false, 00:10:11.793 "compare": false, 00:10:11.793 "compare_and_write": false, 00:10:11.793 "abort": true, 00:10:11.793 "seek_hole": false, 00:10:11.793 "seek_data": false, 00:10:11.793 "copy": true, 00:10:11.793 "nvme_iov_md": false 00:10:11.793 }, 00:10:11.793 "memory_domains": [ 00:10:11.793 { 00:10:11.793 "dma_device_id": "system", 00:10:11.793 "dma_device_type": 1 00:10:11.793 }, 00:10:11.793 { 00:10:11.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.793 "dma_device_type": 2 00:10:11.793 } 00:10:11.793 ], 00:10:11.793 "driver_specific": {} 00:10:11.793 } 00:10:11.793 ] 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:11.793 23:50:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.793 BaseBdev3 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:11.793 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.793 23:50:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.793 [ 00:10:11.793 { 00:10:11.793 "name": "BaseBdev3", 00:10:11.793 "aliases": [ 00:10:11.793 "1a855df7-1205-4637-9fa8-09beb011cf72" 00:10:11.793 ], 00:10:11.793 "product_name": "Malloc disk", 00:10:11.793 "block_size": 512, 00:10:11.793 "num_blocks": 65536, 00:10:11.793 "uuid": "1a855df7-1205-4637-9fa8-09beb011cf72", 00:10:11.793 "assigned_rate_limits": { 00:10:11.793 "rw_ios_per_sec": 0, 00:10:11.793 "rw_mbytes_per_sec": 0, 00:10:11.793 "r_mbytes_per_sec": 0, 00:10:11.793 "w_mbytes_per_sec": 0 00:10:11.793 }, 00:10:11.793 "claimed": false, 00:10:11.793 "zoned": false, 00:10:11.793 "supported_io_types": { 00:10:11.793 "read": true, 00:10:11.793 "write": true, 00:10:11.793 "unmap": true, 00:10:11.793 "flush": true, 00:10:11.793 "reset": true, 00:10:11.793 "nvme_admin": false, 00:10:11.793 "nvme_io": false, 00:10:11.793 "nvme_io_md": false, 00:10:11.793 "write_zeroes": true, 00:10:11.793 "zcopy": true, 00:10:11.793 "get_zone_info": false, 00:10:11.793 "zone_management": false, 00:10:11.793 "zone_append": false, 00:10:11.793 "compare": false, 00:10:11.793 "compare_and_write": false, 00:10:11.793 "abort": true, 00:10:11.793 "seek_hole": false, 00:10:11.793 "seek_data": false, 00:10:11.793 "copy": true, 00:10:11.793 "nvme_iov_md": false 00:10:11.793 }, 00:10:11.793 "memory_domains": [ 00:10:11.793 { 00:10:11.793 "dma_device_id": "system", 00:10:11.793 "dma_device_type": 1 00:10:11.793 }, 00:10:11.793 { 00:10:11.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.793 "dma_device_type": 2 00:10:11.793 } 00:10:11.794 ], 00:10:11.794 "driver_specific": {} 00:10:11.794 } 00:10:11.794 ] 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.794 BaseBdev4 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.794 [ 00:10:11.794 { 00:10:11.794 "name": "BaseBdev4", 00:10:11.794 "aliases": [ 00:10:11.794 "bf3cde8e-8746-4aa0-8c9e-e62749e163e1" 00:10:11.794 ], 00:10:11.794 "product_name": "Malloc disk", 00:10:11.794 "block_size": 512, 00:10:11.794 "num_blocks": 65536, 00:10:11.794 "uuid": "bf3cde8e-8746-4aa0-8c9e-e62749e163e1", 00:10:11.794 "assigned_rate_limits": { 00:10:11.794 "rw_ios_per_sec": 0, 00:10:11.794 "rw_mbytes_per_sec": 0, 00:10:11.794 "r_mbytes_per_sec": 0, 00:10:11.794 "w_mbytes_per_sec": 0 00:10:11.794 }, 00:10:11.794 "claimed": false, 00:10:11.794 "zoned": false, 00:10:11.794 "supported_io_types": { 00:10:11.794 "read": true, 00:10:11.794 "write": true, 00:10:11.794 "unmap": true, 00:10:11.794 "flush": true, 00:10:11.794 "reset": true, 00:10:11.794 "nvme_admin": false, 00:10:11.794 "nvme_io": false, 00:10:11.794 "nvme_io_md": false, 00:10:11.794 "write_zeroes": true, 00:10:11.794 "zcopy": true, 00:10:11.794 "get_zone_info": false, 00:10:11.794 "zone_management": false, 00:10:11.794 "zone_append": false, 00:10:11.794 "compare": false, 00:10:11.794 "compare_and_write": false, 00:10:11.794 "abort": true, 00:10:11.794 "seek_hole": false, 00:10:11.794 "seek_data": false, 00:10:11.794 "copy": true, 00:10:11.794 "nvme_iov_md": false 00:10:11.794 }, 00:10:11.794 "memory_domains": [ 00:10:11.794 { 00:10:11.794 "dma_device_id": "system", 00:10:11.794 "dma_device_type": 1 00:10:11.794 }, 00:10:11.794 { 00:10:11.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.794 "dma_device_type": 2 00:10:11.794 } 00:10:11.794 ], 00:10:11.794 "driver_specific": {} 00:10:11.794 } 00:10:11.794 ] 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.794 [2024-11-02 23:50:05.779007] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:11.794 [2024-11-02 23:50:05.779101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:11.794 [2024-11-02 23:50:05.779152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:11.794 [2024-11-02 23:50:05.781246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.794 [2024-11-02 23:50:05.781341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.794 "name": "Existed_Raid", 00:10:11.794 "uuid": "530b6afe-3989-4c7a-9675-fc5dbfd70d05", 00:10:11.794 "strip_size_kb": 0, 00:10:11.794 "state": "configuring", 00:10:11.794 "raid_level": "raid1", 00:10:11.794 "superblock": true, 00:10:11.794 "num_base_bdevs": 4, 00:10:11.794 "num_base_bdevs_discovered": 3, 00:10:11.794 "num_base_bdevs_operational": 4, 00:10:11.794 "base_bdevs_list": [ 00:10:11.794 { 00:10:11.794 "name": "BaseBdev1", 00:10:11.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.794 "is_configured": false, 00:10:11.794 "data_offset": 0, 00:10:11.794 "data_size": 0 00:10:11.794 }, 00:10:11.794 { 00:10:11.794 "name": "BaseBdev2", 00:10:11.794 "uuid": "acfc8eb7-899b-43cf-b8cb-6aac6d0ba500", 
00:10:11.794 "is_configured": true, 00:10:11.794 "data_offset": 2048, 00:10:11.794 "data_size": 63488 00:10:11.794 }, 00:10:11.794 { 00:10:11.794 "name": "BaseBdev3", 00:10:11.794 "uuid": "1a855df7-1205-4637-9fa8-09beb011cf72", 00:10:11.794 "is_configured": true, 00:10:11.794 "data_offset": 2048, 00:10:11.794 "data_size": 63488 00:10:11.794 }, 00:10:11.794 { 00:10:11.794 "name": "BaseBdev4", 00:10:11.794 "uuid": "bf3cde8e-8746-4aa0-8c9e-e62749e163e1", 00:10:11.794 "is_configured": true, 00:10:11.794 "data_offset": 2048, 00:10:11.794 "data_size": 63488 00:10:11.794 } 00:10:11.794 ] 00:10:11.794 }' 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.794 23:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.363 [2024-11-02 23:50:06.202476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.363 "name": "Existed_Raid", 00:10:12.363 "uuid": "530b6afe-3989-4c7a-9675-fc5dbfd70d05", 00:10:12.363 "strip_size_kb": 0, 00:10:12.363 "state": "configuring", 00:10:12.363 "raid_level": "raid1", 00:10:12.363 "superblock": true, 00:10:12.363 "num_base_bdevs": 4, 00:10:12.363 "num_base_bdevs_discovered": 2, 00:10:12.363 "num_base_bdevs_operational": 4, 00:10:12.363 "base_bdevs_list": [ 00:10:12.363 { 00:10:12.363 "name": "BaseBdev1", 00:10:12.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.363 "is_configured": false, 00:10:12.363 "data_offset": 0, 00:10:12.363 "data_size": 0 00:10:12.363 }, 00:10:12.363 { 00:10:12.363 "name": null, 00:10:12.363 "uuid": "acfc8eb7-899b-43cf-b8cb-6aac6d0ba500", 00:10:12.363 
"is_configured": false, 00:10:12.363 "data_offset": 0, 00:10:12.363 "data_size": 63488 00:10:12.363 }, 00:10:12.363 { 00:10:12.363 "name": "BaseBdev3", 00:10:12.363 "uuid": "1a855df7-1205-4637-9fa8-09beb011cf72", 00:10:12.363 "is_configured": true, 00:10:12.363 "data_offset": 2048, 00:10:12.363 "data_size": 63488 00:10:12.363 }, 00:10:12.363 { 00:10:12.363 "name": "BaseBdev4", 00:10:12.363 "uuid": "bf3cde8e-8746-4aa0-8c9e-e62749e163e1", 00:10:12.363 "is_configured": true, 00:10:12.363 "data_offset": 2048, 00:10:12.363 "data_size": 63488 00:10:12.363 } 00:10:12.363 ] 00:10:12.363 }' 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.363 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.623 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:12.623 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.623 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.623 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.623 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.623 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:12.623 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:12.623 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.623 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.883 [2024-11-02 23:50:06.720589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.883 BaseBdev1 
00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.883 [ 00:10:12.883 { 00:10:12.883 "name": "BaseBdev1", 00:10:12.883 "aliases": [ 00:10:12.883 "42b80d4b-af00-40d9-a2ff-541a4b1a8c13" 00:10:12.883 ], 00:10:12.883 "product_name": "Malloc disk", 00:10:12.883 "block_size": 512, 00:10:12.883 "num_blocks": 65536, 00:10:12.883 "uuid": "42b80d4b-af00-40d9-a2ff-541a4b1a8c13", 00:10:12.883 "assigned_rate_limits": { 00:10:12.883 
"rw_ios_per_sec": 0, 00:10:12.883 "rw_mbytes_per_sec": 0, 00:10:12.883 "r_mbytes_per_sec": 0, 00:10:12.883 "w_mbytes_per_sec": 0 00:10:12.883 }, 00:10:12.883 "claimed": true, 00:10:12.883 "claim_type": "exclusive_write", 00:10:12.883 "zoned": false, 00:10:12.883 "supported_io_types": { 00:10:12.883 "read": true, 00:10:12.883 "write": true, 00:10:12.883 "unmap": true, 00:10:12.883 "flush": true, 00:10:12.883 "reset": true, 00:10:12.883 "nvme_admin": false, 00:10:12.883 "nvme_io": false, 00:10:12.883 "nvme_io_md": false, 00:10:12.883 "write_zeroes": true, 00:10:12.883 "zcopy": true, 00:10:12.883 "get_zone_info": false, 00:10:12.883 "zone_management": false, 00:10:12.883 "zone_append": false, 00:10:12.883 "compare": false, 00:10:12.883 "compare_and_write": false, 00:10:12.883 "abort": true, 00:10:12.883 "seek_hole": false, 00:10:12.883 "seek_data": false, 00:10:12.883 "copy": true, 00:10:12.883 "nvme_iov_md": false 00:10:12.883 }, 00:10:12.883 "memory_domains": [ 00:10:12.883 { 00:10:12.883 "dma_device_id": "system", 00:10:12.883 "dma_device_type": 1 00:10:12.883 }, 00:10:12.883 { 00:10:12.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.883 "dma_device_type": 2 00:10:12.883 } 00:10:12.883 ], 00:10:12.883 "driver_specific": {} 00:10:12.883 } 00:10:12.883 ] 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.883 "name": "Existed_Raid", 00:10:12.883 "uuid": "530b6afe-3989-4c7a-9675-fc5dbfd70d05", 00:10:12.883 "strip_size_kb": 0, 00:10:12.883 "state": "configuring", 00:10:12.883 "raid_level": "raid1", 00:10:12.883 "superblock": true, 00:10:12.883 "num_base_bdevs": 4, 00:10:12.883 "num_base_bdevs_discovered": 3, 00:10:12.883 "num_base_bdevs_operational": 4, 00:10:12.883 "base_bdevs_list": [ 00:10:12.883 { 00:10:12.883 "name": "BaseBdev1", 00:10:12.883 "uuid": "42b80d4b-af00-40d9-a2ff-541a4b1a8c13", 00:10:12.883 "is_configured": true, 00:10:12.883 "data_offset": 2048, 00:10:12.883 "data_size": 63488 
00:10:12.883 }, 00:10:12.883 { 00:10:12.883 "name": null, 00:10:12.883 "uuid": "acfc8eb7-899b-43cf-b8cb-6aac6d0ba500", 00:10:12.883 "is_configured": false, 00:10:12.883 "data_offset": 0, 00:10:12.883 "data_size": 63488 00:10:12.883 }, 00:10:12.883 { 00:10:12.883 "name": "BaseBdev3", 00:10:12.883 "uuid": "1a855df7-1205-4637-9fa8-09beb011cf72", 00:10:12.883 "is_configured": true, 00:10:12.883 "data_offset": 2048, 00:10:12.883 "data_size": 63488 00:10:12.883 }, 00:10:12.883 { 00:10:12.883 "name": "BaseBdev4", 00:10:12.883 "uuid": "bf3cde8e-8746-4aa0-8c9e-e62749e163e1", 00:10:12.883 "is_configured": true, 00:10:12.883 "data_offset": 2048, 00:10:12.883 "data_size": 63488 00:10:12.883 } 00:10:12.883 ] 00:10:12.883 }' 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.883 23:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.458 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.458 23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.458 23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.458 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:13.458 23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.458 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:13.458 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:13.458 23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.458 23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.458 
[2024-11-02 23:50:07.295728] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:13.458 23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.458 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:13.458 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.458 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.458 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.458 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.458 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.458 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.459 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.459 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.459 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.459 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.459 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.459 23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.459 23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.459 23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.459 23:50:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.459 "name": "Existed_Raid", 00:10:13.459 "uuid": "530b6afe-3989-4c7a-9675-fc5dbfd70d05", 00:10:13.459 "strip_size_kb": 0, 00:10:13.459 "state": "configuring", 00:10:13.459 "raid_level": "raid1", 00:10:13.459 "superblock": true, 00:10:13.459 "num_base_bdevs": 4, 00:10:13.459 "num_base_bdevs_discovered": 2, 00:10:13.459 "num_base_bdevs_operational": 4, 00:10:13.459 "base_bdevs_list": [ 00:10:13.459 { 00:10:13.459 "name": "BaseBdev1", 00:10:13.459 "uuid": "42b80d4b-af00-40d9-a2ff-541a4b1a8c13", 00:10:13.459 "is_configured": true, 00:10:13.459 "data_offset": 2048, 00:10:13.459 "data_size": 63488 00:10:13.459 }, 00:10:13.459 { 00:10:13.459 "name": null, 00:10:13.459 "uuid": "acfc8eb7-899b-43cf-b8cb-6aac6d0ba500", 00:10:13.459 "is_configured": false, 00:10:13.459 "data_offset": 0, 00:10:13.459 "data_size": 63488 00:10:13.459 }, 00:10:13.459 { 00:10:13.459 "name": null, 00:10:13.459 "uuid": "1a855df7-1205-4637-9fa8-09beb011cf72", 00:10:13.459 "is_configured": false, 00:10:13.459 "data_offset": 0, 00:10:13.459 "data_size": 63488 00:10:13.459 }, 00:10:13.459 { 00:10:13.459 "name": "BaseBdev4", 00:10:13.459 "uuid": "bf3cde8e-8746-4aa0-8c9e-e62749e163e1", 00:10:13.459 "is_configured": true, 00:10:13.459 "data_offset": 2048, 00:10:13.459 "data_size": 63488 00:10:13.459 } 00:10:13.459 ] 00:10:13.459 }' 00:10:13.459 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.459 23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.718 
23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.718 [2024-11-02 23:50:07.746955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.718 "name": "Existed_Raid", 00:10:13.718 "uuid": "530b6afe-3989-4c7a-9675-fc5dbfd70d05", 00:10:13.718 "strip_size_kb": 0, 00:10:13.718 "state": "configuring", 00:10:13.718 "raid_level": "raid1", 00:10:13.718 "superblock": true, 00:10:13.718 "num_base_bdevs": 4, 00:10:13.718 "num_base_bdevs_discovered": 3, 00:10:13.718 "num_base_bdevs_operational": 4, 00:10:13.718 "base_bdevs_list": [ 00:10:13.718 { 00:10:13.718 "name": "BaseBdev1", 00:10:13.718 "uuid": "42b80d4b-af00-40d9-a2ff-541a4b1a8c13", 00:10:13.718 "is_configured": true, 00:10:13.718 "data_offset": 2048, 00:10:13.718 "data_size": 63488 00:10:13.718 }, 00:10:13.718 { 00:10:13.718 "name": null, 00:10:13.718 "uuid": "acfc8eb7-899b-43cf-b8cb-6aac6d0ba500", 00:10:13.718 "is_configured": false, 00:10:13.718 "data_offset": 0, 00:10:13.718 "data_size": 63488 00:10:13.718 }, 00:10:13.718 { 00:10:13.718 "name": "BaseBdev3", 00:10:13.718 "uuid": "1a855df7-1205-4637-9fa8-09beb011cf72", 00:10:13.718 "is_configured": true, 00:10:13.718 "data_offset": 2048, 00:10:13.718 "data_size": 63488 00:10:13.718 }, 00:10:13.718 { 00:10:13.718 "name": "BaseBdev4", 00:10:13.718 "uuid": 
"bf3cde8e-8746-4aa0-8c9e-e62749e163e1", 00:10:13.718 "is_configured": true, 00:10:13.718 "data_offset": 2048, 00:10:13.718 "data_size": 63488 00:10:13.718 } 00:10:13.718 ] 00:10:13.718 }' 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.718 23:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.286 [2024-11-02 23:50:08.262212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.286 "name": "Existed_Raid", 00:10:14.286 "uuid": "530b6afe-3989-4c7a-9675-fc5dbfd70d05", 00:10:14.286 "strip_size_kb": 0, 00:10:14.286 "state": "configuring", 00:10:14.286 "raid_level": "raid1", 00:10:14.286 "superblock": true, 00:10:14.286 "num_base_bdevs": 4, 00:10:14.286 "num_base_bdevs_discovered": 2, 00:10:14.286 "num_base_bdevs_operational": 4, 00:10:14.286 "base_bdevs_list": [ 00:10:14.286 { 00:10:14.286 "name": null, 00:10:14.286 
"uuid": "42b80d4b-af00-40d9-a2ff-541a4b1a8c13", 00:10:14.286 "is_configured": false, 00:10:14.286 "data_offset": 0, 00:10:14.286 "data_size": 63488 00:10:14.286 }, 00:10:14.286 { 00:10:14.286 "name": null, 00:10:14.286 "uuid": "acfc8eb7-899b-43cf-b8cb-6aac6d0ba500", 00:10:14.286 "is_configured": false, 00:10:14.286 "data_offset": 0, 00:10:14.286 "data_size": 63488 00:10:14.286 }, 00:10:14.286 { 00:10:14.286 "name": "BaseBdev3", 00:10:14.286 "uuid": "1a855df7-1205-4637-9fa8-09beb011cf72", 00:10:14.286 "is_configured": true, 00:10:14.286 "data_offset": 2048, 00:10:14.286 "data_size": 63488 00:10:14.286 }, 00:10:14.286 { 00:10:14.286 "name": "BaseBdev4", 00:10:14.286 "uuid": "bf3cde8e-8746-4aa0-8c9e-e62749e163e1", 00:10:14.286 "is_configured": true, 00:10:14.286 "data_offset": 2048, 00:10:14.286 "data_size": 63488 00:10:14.286 } 00:10:14.286 ] 00:10:14.286 }' 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.286 23:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.854 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.854 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:14.854 23:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.854 23:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.854 23:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.854 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.855 [2024-11-02 23:50:08.811855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.855 23:50:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.855 "name": "Existed_Raid", 00:10:14.855 "uuid": "530b6afe-3989-4c7a-9675-fc5dbfd70d05", 00:10:14.855 "strip_size_kb": 0, 00:10:14.855 "state": "configuring", 00:10:14.855 "raid_level": "raid1", 00:10:14.855 "superblock": true, 00:10:14.855 "num_base_bdevs": 4, 00:10:14.855 "num_base_bdevs_discovered": 3, 00:10:14.855 "num_base_bdevs_operational": 4, 00:10:14.855 "base_bdevs_list": [ 00:10:14.855 { 00:10:14.855 "name": null, 00:10:14.855 "uuid": "42b80d4b-af00-40d9-a2ff-541a4b1a8c13", 00:10:14.855 "is_configured": false, 00:10:14.855 "data_offset": 0, 00:10:14.855 "data_size": 63488 00:10:14.855 }, 00:10:14.855 { 00:10:14.855 "name": "BaseBdev2", 00:10:14.855 "uuid": "acfc8eb7-899b-43cf-b8cb-6aac6d0ba500", 00:10:14.855 "is_configured": true, 00:10:14.855 "data_offset": 2048, 00:10:14.855 "data_size": 63488 00:10:14.855 }, 00:10:14.855 { 00:10:14.855 "name": "BaseBdev3", 00:10:14.855 "uuid": "1a855df7-1205-4637-9fa8-09beb011cf72", 00:10:14.855 "is_configured": true, 00:10:14.855 "data_offset": 2048, 00:10:14.855 "data_size": 63488 00:10:14.855 }, 00:10:14.855 { 00:10:14.855 "name": "BaseBdev4", 00:10:14.855 "uuid": "bf3cde8e-8746-4aa0-8c9e-e62749e163e1", 00:10:14.855 "is_configured": true, 00:10:14.855 "data_offset": 2048, 00:10:14.855 "data_size": 63488 00:10:14.855 } 00:10:14.855 ] 00:10:14.855 }' 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.855 23:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.435 23:50:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 42b80d4b-af00-40d9-a2ff-541a4b1a8c13 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.435 [2024-11-02 23:50:09.349726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:15.435 [2024-11-02 23:50:09.349924] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:15.435 [2024-11-02 23:50:09.349941] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:15.435 [2024-11-02 23:50:09.350176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:10:15.435 [2024-11-02 23:50:09.350302] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:15.435 [2024-11-02 23:50:09.350350] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:15.435 NewBaseBdev 00:10:15.435 [2024-11-02 23:50:09.350447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.435 23:50:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.435 [ 00:10:15.435 { 00:10:15.435 "name": "NewBaseBdev", 00:10:15.435 "aliases": [ 00:10:15.435 "42b80d4b-af00-40d9-a2ff-541a4b1a8c13" 00:10:15.435 ], 00:10:15.435 "product_name": "Malloc disk", 00:10:15.435 "block_size": 512, 00:10:15.435 "num_blocks": 65536, 00:10:15.435 "uuid": "42b80d4b-af00-40d9-a2ff-541a4b1a8c13", 00:10:15.435 "assigned_rate_limits": { 00:10:15.435 "rw_ios_per_sec": 0, 00:10:15.435 "rw_mbytes_per_sec": 0, 00:10:15.435 "r_mbytes_per_sec": 0, 00:10:15.435 "w_mbytes_per_sec": 0 00:10:15.435 }, 00:10:15.435 "claimed": true, 00:10:15.435 "claim_type": "exclusive_write", 00:10:15.435 "zoned": false, 00:10:15.435 "supported_io_types": { 00:10:15.435 "read": true, 00:10:15.435 "write": true, 00:10:15.435 "unmap": true, 00:10:15.435 "flush": true, 00:10:15.435 "reset": true, 00:10:15.435 "nvme_admin": false, 00:10:15.435 "nvme_io": false, 00:10:15.435 "nvme_io_md": false, 00:10:15.435 "write_zeroes": true, 00:10:15.435 "zcopy": true, 00:10:15.435 "get_zone_info": false, 00:10:15.435 "zone_management": false, 00:10:15.435 "zone_append": false, 00:10:15.435 "compare": false, 00:10:15.435 "compare_and_write": false, 00:10:15.435 "abort": true, 00:10:15.435 "seek_hole": false, 00:10:15.435 "seek_data": false, 00:10:15.435 "copy": true, 00:10:15.435 "nvme_iov_md": false 00:10:15.435 }, 00:10:15.435 "memory_domains": [ 00:10:15.435 { 00:10:15.435 "dma_device_id": "system", 00:10:15.435 "dma_device_type": 1 00:10:15.435 }, 00:10:15.435 { 00:10:15.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.435 "dma_device_type": 2 00:10:15.435 } 00:10:15.435 ], 00:10:15.435 "driver_specific": {} 00:10:15.435 } 00:10:15.435 ] 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:15.435 23:50:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.435 "name": "Existed_Raid", 00:10:15.435 "uuid": "530b6afe-3989-4c7a-9675-fc5dbfd70d05", 00:10:15.435 "strip_size_kb": 0, 00:10:15.435 
"state": "online", 00:10:15.435 "raid_level": "raid1", 00:10:15.435 "superblock": true, 00:10:15.435 "num_base_bdevs": 4, 00:10:15.435 "num_base_bdevs_discovered": 4, 00:10:15.435 "num_base_bdevs_operational": 4, 00:10:15.435 "base_bdevs_list": [ 00:10:15.435 { 00:10:15.435 "name": "NewBaseBdev", 00:10:15.435 "uuid": "42b80d4b-af00-40d9-a2ff-541a4b1a8c13", 00:10:15.435 "is_configured": true, 00:10:15.435 "data_offset": 2048, 00:10:15.435 "data_size": 63488 00:10:15.435 }, 00:10:15.435 { 00:10:15.435 "name": "BaseBdev2", 00:10:15.435 "uuid": "acfc8eb7-899b-43cf-b8cb-6aac6d0ba500", 00:10:15.435 "is_configured": true, 00:10:15.435 "data_offset": 2048, 00:10:15.435 "data_size": 63488 00:10:15.435 }, 00:10:15.435 { 00:10:15.435 "name": "BaseBdev3", 00:10:15.435 "uuid": "1a855df7-1205-4637-9fa8-09beb011cf72", 00:10:15.435 "is_configured": true, 00:10:15.435 "data_offset": 2048, 00:10:15.435 "data_size": 63488 00:10:15.435 }, 00:10:15.435 { 00:10:15.435 "name": "BaseBdev4", 00:10:15.435 "uuid": "bf3cde8e-8746-4aa0-8c9e-e62749e163e1", 00:10:15.435 "is_configured": true, 00:10:15.435 "data_offset": 2048, 00:10:15.435 "data_size": 63488 00:10:15.435 } 00:10:15.435 ] 00:10:15.435 }' 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.435 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:16.006 
23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.006 [2024-11-02 23:50:09.845257] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.006 "name": "Existed_Raid", 00:10:16.006 "aliases": [ 00:10:16.006 "530b6afe-3989-4c7a-9675-fc5dbfd70d05" 00:10:16.006 ], 00:10:16.006 "product_name": "Raid Volume", 00:10:16.006 "block_size": 512, 00:10:16.006 "num_blocks": 63488, 00:10:16.006 "uuid": "530b6afe-3989-4c7a-9675-fc5dbfd70d05", 00:10:16.006 "assigned_rate_limits": { 00:10:16.006 "rw_ios_per_sec": 0, 00:10:16.006 "rw_mbytes_per_sec": 0, 00:10:16.006 "r_mbytes_per_sec": 0, 00:10:16.006 "w_mbytes_per_sec": 0 00:10:16.006 }, 00:10:16.006 "claimed": false, 00:10:16.006 "zoned": false, 00:10:16.006 "supported_io_types": { 00:10:16.006 "read": true, 00:10:16.006 "write": true, 00:10:16.006 "unmap": false, 00:10:16.006 "flush": false, 00:10:16.006 "reset": true, 00:10:16.006 "nvme_admin": false, 00:10:16.006 "nvme_io": false, 00:10:16.006 "nvme_io_md": false, 00:10:16.006 "write_zeroes": true, 00:10:16.006 "zcopy": false, 00:10:16.006 "get_zone_info": false, 00:10:16.006 "zone_management": false, 00:10:16.006 "zone_append": false, 00:10:16.006 "compare": false, 00:10:16.006 "compare_and_write": false, 00:10:16.006 
"abort": false, 00:10:16.006 "seek_hole": false, 00:10:16.006 "seek_data": false, 00:10:16.006 "copy": false, 00:10:16.006 "nvme_iov_md": false 00:10:16.006 }, 00:10:16.006 "memory_domains": [ 00:10:16.006 { 00:10:16.006 "dma_device_id": "system", 00:10:16.006 "dma_device_type": 1 00:10:16.006 }, 00:10:16.006 { 00:10:16.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.006 "dma_device_type": 2 00:10:16.006 }, 00:10:16.006 { 00:10:16.006 "dma_device_id": "system", 00:10:16.006 "dma_device_type": 1 00:10:16.006 }, 00:10:16.006 { 00:10:16.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.006 "dma_device_type": 2 00:10:16.006 }, 00:10:16.006 { 00:10:16.006 "dma_device_id": "system", 00:10:16.006 "dma_device_type": 1 00:10:16.006 }, 00:10:16.006 { 00:10:16.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.006 "dma_device_type": 2 00:10:16.006 }, 00:10:16.006 { 00:10:16.006 "dma_device_id": "system", 00:10:16.006 "dma_device_type": 1 00:10:16.006 }, 00:10:16.006 { 00:10:16.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.006 "dma_device_type": 2 00:10:16.006 } 00:10:16.006 ], 00:10:16.006 "driver_specific": { 00:10:16.006 "raid": { 00:10:16.006 "uuid": "530b6afe-3989-4c7a-9675-fc5dbfd70d05", 00:10:16.006 "strip_size_kb": 0, 00:10:16.006 "state": "online", 00:10:16.006 "raid_level": "raid1", 00:10:16.006 "superblock": true, 00:10:16.006 "num_base_bdevs": 4, 00:10:16.006 "num_base_bdevs_discovered": 4, 00:10:16.006 "num_base_bdevs_operational": 4, 00:10:16.006 "base_bdevs_list": [ 00:10:16.006 { 00:10:16.006 "name": "NewBaseBdev", 00:10:16.006 "uuid": "42b80d4b-af00-40d9-a2ff-541a4b1a8c13", 00:10:16.006 "is_configured": true, 00:10:16.006 "data_offset": 2048, 00:10:16.006 "data_size": 63488 00:10:16.006 }, 00:10:16.006 { 00:10:16.006 "name": "BaseBdev2", 00:10:16.006 "uuid": "acfc8eb7-899b-43cf-b8cb-6aac6d0ba500", 00:10:16.006 "is_configured": true, 00:10:16.006 "data_offset": 2048, 00:10:16.006 "data_size": 63488 00:10:16.006 }, 00:10:16.006 { 
00:10:16.006 "name": "BaseBdev3", 00:10:16.006 "uuid": "1a855df7-1205-4637-9fa8-09beb011cf72", 00:10:16.006 "is_configured": true, 00:10:16.006 "data_offset": 2048, 00:10:16.006 "data_size": 63488 00:10:16.006 }, 00:10:16.006 { 00:10:16.006 "name": "BaseBdev4", 00:10:16.006 "uuid": "bf3cde8e-8746-4aa0-8c9e-e62749e163e1", 00:10:16.006 "is_configured": true, 00:10:16.006 "data_offset": 2048, 00:10:16.006 "data_size": 63488 00:10:16.006 } 00:10:16.006 ] 00:10:16.006 } 00:10:16.006 } 00:10:16.006 }' 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:16.006 BaseBdev2 00:10:16.006 BaseBdev3 00:10:16.006 BaseBdev4' 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.006 23:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.006 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.007 23:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:16.007 23:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.007 23:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.007 23:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:16.007 23:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.007 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.007 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.007 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.007 23:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.007 23:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.007 23:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.007 23:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.007 23:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:16.007 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.007 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.267 [2024-11-02 23:50:10.160404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:16.267 [2024-11-02 23:50:10.160433] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.267 [2024-11-02 23:50:10.160521] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.267 [2024-11-02 23:50:10.160797] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.267 [2024-11-02 23:50:10.160818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001c80 name Existed_Raid, state offline 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84453 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 84453 ']' 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 84453 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84453 00:10:16.267 killing process with pid 84453 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84453' 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 84453 00:10:16.267 [2024-11-02 23:50:10.207477] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:16.267 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 84453 00:10:16.267 [2024-11-02 23:50:10.248870] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:16.528 23:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:16.528 00:10:16.528 real 0m9.719s 00:10:16.528 user 0m16.580s 00:10:16.528 sys 0m2.187s 00:10:16.528 ************************************ 00:10:16.528 END TEST raid_state_function_test_sb 
00:10:16.528 ************************************ 00:10:16.528 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:16.528 23:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.528 23:50:10 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:10:16.528 23:50:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:16.528 23:50:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:16.528 23:50:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:16.528 ************************************ 00:10:16.528 START TEST raid_superblock_test 00:10:16.528 ************************************ 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:16.528 23:50:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85101 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85101 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 85101 ']' 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:16.528 23:50:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.836 [2024-11-02 23:50:10.620387] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:10:16.836 [2024-11-02 23:50:10.620713] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85101 ] 00:10:16.836 [2024-11-02 23:50:10.776183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.836 [2024-11-02 23:50:10.805345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.836 [2024-11-02 23:50:10.847671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.836 [2024-11-02 23:50:10.847842] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:17.405 
23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.405 malloc1 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.405 [2024-11-02 23:50:11.482293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:17.405 [2024-11-02 23:50:11.482361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.405 [2024-11-02 23:50:11.482392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:17.405 [2024-11-02 23:50:11.482407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.405 [2024-11-02 23:50:11.484554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.405 [2024-11-02 23:50:11.484600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:17.405 pt1 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.405 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.664 malloc2 00:10:17.664 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.664 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:17.664 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.664 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.664 [2024-11-02 23:50:11.510895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:17.664 [2024-11-02 23:50:11.510984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.664 [2024-11-02 23:50:11.511034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:17.664 [2024-11-02 23:50:11.511069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.664 [2024-11-02 23:50:11.513112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.664 [2024-11-02 23:50:11.513181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:17.664 
pt2 00:10:17.664 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.664 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:17.664 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.664 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:17.664 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:17.664 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:17.664 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:17.664 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:17.664 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:17.664 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:17.664 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.664 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.664 malloc3 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.665 [2024-11-02 23:50:11.543331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:17.665 [2024-11-02 23:50:11.543450] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.665 [2024-11-02 23:50:11.543506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:17.665 [2024-11-02 23:50:11.543541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.665 [2024-11-02 23:50:11.545657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.665 [2024-11-02 23:50:11.545732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:17.665 pt3 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.665 malloc4 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.665 [2024-11-02 23:50:11.585567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:17.665 [2024-11-02 23:50:11.585680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.665 [2024-11-02 23:50:11.585715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:17.665 [2024-11-02 23:50:11.585755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.665 [2024-11-02 23:50:11.587878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.665 [2024-11-02 23:50:11.587948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:17.665 pt4 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.665 [2024-11-02 23:50:11.597542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:17.665 [2024-11-02 23:50:11.599432] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:17.665 [2024-11-02 23:50:11.599546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:17.665 [2024-11-02 23:50:11.599614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:17.665 [2024-11-02 23:50:11.599831] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:17.665 [2024-11-02 23:50:11.599881] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:17.665 [2024-11-02 23:50:11.600166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:17.665 [2024-11-02 23:50:11.600353] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:17.665 [2024-11-02 23:50:11.600399] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:17.665 [2024-11-02 23:50:11.600581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.665 
23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.665 "name": "raid_bdev1", 00:10:17.665 "uuid": "00c38887-6cb3-4764-ae1b-0984da187245", 00:10:17.665 "strip_size_kb": 0, 00:10:17.665 "state": "online", 00:10:17.665 "raid_level": "raid1", 00:10:17.665 "superblock": true, 00:10:17.665 "num_base_bdevs": 4, 00:10:17.665 "num_base_bdevs_discovered": 4, 00:10:17.665 "num_base_bdevs_operational": 4, 00:10:17.665 "base_bdevs_list": [ 00:10:17.665 { 00:10:17.665 "name": "pt1", 00:10:17.665 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.665 "is_configured": true, 00:10:17.665 "data_offset": 2048, 00:10:17.665 "data_size": 63488 00:10:17.665 }, 00:10:17.665 { 00:10:17.665 "name": "pt2", 00:10:17.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.665 "is_configured": true, 00:10:17.665 "data_offset": 2048, 00:10:17.665 "data_size": 63488 00:10:17.665 }, 00:10:17.665 { 00:10:17.665 "name": "pt3", 00:10:17.665 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.665 "is_configured": true, 00:10:17.665 "data_offset": 2048, 00:10:17.665 "data_size": 63488 
00:10:17.665 }, 00:10:17.665 { 00:10:17.665 "name": "pt4", 00:10:17.665 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:17.665 "is_configured": true, 00:10:17.665 "data_offset": 2048, 00:10:17.665 "data_size": 63488 00:10:17.665 } 00:10:17.665 ] 00:10:17.665 }' 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.665 23:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.923 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:17.923 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:17.923 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.923 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.923 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.923 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.923 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:17.923 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:18.182 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.182 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.182 [2024-11-02 23:50:12.021153] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.182 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.182 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:18.182 "name": "raid_bdev1", 00:10:18.182 "aliases": [ 00:10:18.182 "00c38887-6cb3-4764-ae1b-0984da187245" 00:10:18.182 ], 
00:10:18.182 "product_name": "Raid Volume", 00:10:18.182 "block_size": 512, 00:10:18.182 "num_blocks": 63488, 00:10:18.182 "uuid": "00c38887-6cb3-4764-ae1b-0984da187245", 00:10:18.182 "assigned_rate_limits": { 00:10:18.182 "rw_ios_per_sec": 0, 00:10:18.182 "rw_mbytes_per_sec": 0, 00:10:18.182 "r_mbytes_per_sec": 0, 00:10:18.182 "w_mbytes_per_sec": 0 00:10:18.182 }, 00:10:18.182 "claimed": false, 00:10:18.182 "zoned": false, 00:10:18.182 "supported_io_types": { 00:10:18.182 "read": true, 00:10:18.182 "write": true, 00:10:18.182 "unmap": false, 00:10:18.182 "flush": false, 00:10:18.182 "reset": true, 00:10:18.182 "nvme_admin": false, 00:10:18.182 "nvme_io": false, 00:10:18.182 "nvme_io_md": false, 00:10:18.182 "write_zeroes": true, 00:10:18.182 "zcopy": false, 00:10:18.182 "get_zone_info": false, 00:10:18.182 "zone_management": false, 00:10:18.182 "zone_append": false, 00:10:18.182 "compare": false, 00:10:18.182 "compare_and_write": false, 00:10:18.182 "abort": false, 00:10:18.182 "seek_hole": false, 00:10:18.182 "seek_data": false, 00:10:18.182 "copy": false, 00:10:18.182 "nvme_iov_md": false 00:10:18.182 }, 00:10:18.182 "memory_domains": [ 00:10:18.182 { 00:10:18.182 "dma_device_id": "system", 00:10:18.182 "dma_device_type": 1 00:10:18.182 }, 00:10:18.182 { 00:10:18.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.182 "dma_device_type": 2 00:10:18.182 }, 00:10:18.182 { 00:10:18.182 "dma_device_id": "system", 00:10:18.182 "dma_device_type": 1 00:10:18.182 }, 00:10:18.182 { 00:10:18.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.182 "dma_device_type": 2 00:10:18.182 }, 00:10:18.182 { 00:10:18.182 "dma_device_id": "system", 00:10:18.182 "dma_device_type": 1 00:10:18.182 }, 00:10:18.182 { 00:10:18.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.182 "dma_device_type": 2 00:10:18.182 }, 00:10:18.182 { 00:10:18.182 "dma_device_id": "system", 00:10:18.182 "dma_device_type": 1 00:10:18.182 }, 00:10:18.182 { 00:10:18.182 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:18.182 "dma_device_type": 2 00:10:18.182 } 00:10:18.182 ], 00:10:18.182 "driver_specific": { 00:10:18.182 "raid": { 00:10:18.182 "uuid": "00c38887-6cb3-4764-ae1b-0984da187245", 00:10:18.182 "strip_size_kb": 0, 00:10:18.182 "state": "online", 00:10:18.182 "raid_level": "raid1", 00:10:18.182 "superblock": true, 00:10:18.182 "num_base_bdevs": 4, 00:10:18.182 "num_base_bdevs_discovered": 4, 00:10:18.182 "num_base_bdevs_operational": 4, 00:10:18.182 "base_bdevs_list": [ 00:10:18.182 { 00:10:18.182 "name": "pt1", 00:10:18.182 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.182 "is_configured": true, 00:10:18.182 "data_offset": 2048, 00:10:18.182 "data_size": 63488 00:10:18.182 }, 00:10:18.182 { 00:10:18.182 "name": "pt2", 00:10:18.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.182 "is_configured": true, 00:10:18.182 "data_offset": 2048, 00:10:18.182 "data_size": 63488 00:10:18.182 }, 00:10:18.182 { 00:10:18.182 "name": "pt3", 00:10:18.182 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.182 "is_configured": true, 00:10:18.182 "data_offset": 2048, 00:10:18.182 "data_size": 63488 00:10:18.182 }, 00:10:18.182 { 00:10:18.182 "name": "pt4", 00:10:18.182 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:18.182 "is_configured": true, 00:10:18.182 "data_offset": 2048, 00:10:18.182 "data_size": 63488 00:10:18.182 } 00:10:18.182 ] 00:10:18.182 } 00:10:18.182 } 00:10:18.182 }' 00:10:18.182 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:18.182 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:18.182 pt2 00:10:18.182 pt3 00:10:18.182 pt4' 00:10:18.182 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.182 23:50:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:18.182 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.182 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:18.182 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.182 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.182 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.182 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.183 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.183 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.183 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.183 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.183 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:18.183 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.183 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.183 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.183 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.183 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.183 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.183 23:50:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.183 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:18.183 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.183 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.443 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:18.444 [2024-11-02 23:50:12.364486] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=00c38887-6cb3-4764-ae1b-0984da187245 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 00c38887-6cb3-4764-ae1b-0984da187245 ']' 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.444 [2024-11-02 23:50:12.412117] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.444 [2024-11-02 23:50:12.412148] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:18.444 [2024-11-02 23:50:12.412226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.444 [2024-11-02 23:50:12.412314] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.444 [2024-11-02 23:50:12.412325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.444 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.703 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.703 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:18.703 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:18.703 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:18.703 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:18.703 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:18.703 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.703 23:50:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:18.703 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.703 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:18.703 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.703 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.703 [2024-11-02 23:50:12.579891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:18.703 [2024-11-02 23:50:12.581864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:18.703 [2024-11-02 23:50:12.581954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:18.703 [2024-11-02 23:50:12.582002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:18.703 [2024-11-02 23:50:12.582084] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:18.703 [2024-11-02 23:50:12.582164] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:18.703 [2024-11-02 23:50:12.582249] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:18.703 [2024-11-02 23:50:12.582298] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:18.703 [2024-11-02 23:50:12.582363] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.703 [2024-11-02 23:50:12.582417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 
00:10:18.703 request: 00:10:18.703 { 00:10:18.703 "name": "raid_bdev1", 00:10:18.703 "raid_level": "raid1", 00:10:18.703 "base_bdevs": [ 00:10:18.703 "malloc1", 00:10:18.703 "malloc2", 00:10:18.703 "malloc3", 00:10:18.703 "malloc4" 00:10:18.703 ], 00:10:18.703 "superblock": false, 00:10:18.703 "method": "bdev_raid_create", 00:10:18.703 "req_id": 1 00:10:18.703 } 00:10:18.703 Got JSON-RPC error response 00:10:18.703 response: 00:10:18.703 { 00:10:18.703 "code": -17, 00:10:18.703 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:18.703 } 00:10:18.703 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:18.703 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:18.704 23:50:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.704 [2024-11-02 23:50:12.647734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:18.704 [2024-11-02 23:50:12.647835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.704 [2024-11-02 23:50:12.647873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:18.704 [2024-11-02 23:50:12.647899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.704 [2024-11-02 23:50:12.650050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.704 [2024-11-02 23:50:12.650115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:18.704 [2024-11-02 23:50:12.650210] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:18.704 [2024-11-02 23:50:12.650279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:18.704 pt1 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.704 23:50:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.704 "name": "raid_bdev1", 00:10:18.704 "uuid": "00c38887-6cb3-4764-ae1b-0984da187245", 00:10:18.704 "strip_size_kb": 0, 00:10:18.704 "state": "configuring", 00:10:18.704 "raid_level": "raid1", 00:10:18.704 "superblock": true, 00:10:18.704 "num_base_bdevs": 4, 00:10:18.704 "num_base_bdevs_discovered": 1, 00:10:18.704 "num_base_bdevs_operational": 4, 00:10:18.704 "base_bdevs_list": [ 00:10:18.704 { 00:10:18.704 "name": "pt1", 00:10:18.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.704 "is_configured": true, 00:10:18.704 "data_offset": 2048, 00:10:18.704 "data_size": 63488 00:10:18.704 }, 00:10:18.704 { 00:10:18.704 "name": null, 00:10:18.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.704 "is_configured": false, 00:10:18.704 "data_offset": 2048, 00:10:18.704 "data_size": 63488 00:10:18.704 }, 00:10:18.704 { 00:10:18.704 "name": null, 00:10:18.704 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.704 
"is_configured": false, 00:10:18.704 "data_offset": 2048, 00:10:18.704 "data_size": 63488 00:10:18.704 }, 00:10:18.704 { 00:10:18.704 "name": null, 00:10:18.704 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:18.704 "is_configured": false, 00:10:18.704 "data_offset": 2048, 00:10:18.704 "data_size": 63488 00:10:18.704 } 00:10:18.704 ] 00:10:18.704 }' 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.704 23:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.271 [2024-11-02 23:50:13.082988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:19.271 [2024-11-02 23:50:13.083052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.271 [2024-11-02 23:50:13.083077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:19.271 [2024-11-02 23:50:13.083087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.271 [2024-11-02 23:50:13.083498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.271 [2024-11-02 23:50:13.083523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:19.271 [2024-11-02 23:50:13.083621] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:19.271 [2024-11-02 23:50:13.083645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:10:19.271 pt2 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.271 [2024-11-02 23:50:13.094979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.271 23:50:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.271 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.272 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.272 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.272 "name": "raid_bdev1", 00:10:19.272 "uuid": "00c38887-6cb3-4764-ae1b-0984da187245", 00:10:19.272 "strip_size_kb": 0, 00:10:19.272 "state": "configuring", 00:10:19.272 "raid_level": "raid1", 00:10:19.272 "superblock": true, 00:10:19.272 "num_base_bdevs": 4, 00:10:19.272 "num_base_bdevs_discovered": 1, 00:10:19.272 "num_base_bdevs_operational": 4, 00:10:19.272 "base_bdevs_list": [ 00:10:19.272 { 00:10:19.272 "name": "pt1", 00:10:19.272 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.272 "is_configured": true, 00:10:19.272 "data_offset": 2048, 00:10:19.272 "data_size": 63488 00:10:19.272 }, 00:10:19.272 { 00:10:19.272 "name": null, 00:10:19.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.272 "is_configured": false, 00:10:19.272 "data_offset": 0, 00:10:19.272 "data_size": 63488 00:10:19.272 }, 00:10:19.272 { 00:10:19.272 "name": null, 00:10:19.272 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.272 "is_configured": false, 00:10:19.272 "data_offset": 2048, 00:10:19.272 "data_size": 63488 00:10:19.272 }, 00:10:19.272 { 00:10:19.272 "name": null, 00:10:19.272 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:19.272 "is_configured": false, 00:10:19.272 "data_offset": 2048, 00:10:19.272 "data_size": 63488 00:10:19.272 } 00:10:19.272 ] 00:10:19.272 }' 00:10:19.272 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.272 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.531 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:10:19.531 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:19.531 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:19.531 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.531 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.531 [2024-11-02 23:50:13.554271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:19.531 [2024-11-02 23:50:13.554373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.531 [2024-11-02 23:50:13.554395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:19.531 [2024-11-02 23:50:13.554406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.531 [2024-11-02 23:50:13.554818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.531 [2024-11-02 23:50:13.554839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:19.531 [2024-11-02 23:50:13.554915] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:19.531 [2024-11-02 23:50:13.554940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:19.531 pt2 00:10:19.531 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.531 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:19.531 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:19.531 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:19.531 23:50:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.531 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.531 [2024-11-02 23:50:13.562207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:19.531 [2024-11-02 23:50:13.562257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.531 [2024-11-02 23:50:13.562285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:19.531 [2024-11-02 23:50:13.562295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.531 [2024-11-02 23:50:13.562665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.531 [2024-11-02 23:50:13.562683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:19.531 [2024-11-02 23:50:13.562735] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:19.531 [2024-11-02 23:50:13.562753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:19.531 pt3 00:10:19.531 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.531 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:19.531 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:19.531 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:19.531 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.531 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.531 [2024-11-02 23:50:13.570207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:19.531 [2024-11-02 
23:50:13.570264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.531 [2024-11-02 23:50:13.570279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:19.531 [2024-11-02 23:50:13.570288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.531 [2024-11-02 23:50:13.570584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.531 [2024-11-02 23:50:13.570601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:19.531 [2024-11-02 23:50:13.570654] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:19.531 [2024-11-02 23:50:13.570673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:19.531 [2024-11-02 23:50:13.570786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:19.531 [2024-11-02 23:50:13.570798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:19.531 [2024-11-02 23:50:13.571031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:19.531 [2024-11-02 23:50:13.571168] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:19.532 [2024-11-02 23:50:13.571178] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:10:19.532 [2024-11-02 23:50:13.571281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.532 pt4 00:10:19.532 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.532 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:19.532 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:19.532 23:50:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:19.532 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.532 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.532 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.532 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.532 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.532 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.532 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.532 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.532 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.532 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.532 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.532 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.532 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.532 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.791 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.791 "name": "raid_bdev1", 00:10:19.791 "uuid": "00c38887-6cb3-4764-ae1b-0984da187245", 00:10:19.791 "strip_size_kb": 0, 00:10:19.791 "state": "online", 00:10:19.791 "raid_level": "raid1", 00:10:19.791 "superblock": true, 00:10:19.791 "num_base_bdevs": 4, 00:10:19.791 
"num_base_bdevs_discovered": 4, 00:10:19.791 "num_base_bdevs_operational": 4, 00:10:19.791 "base_bdevs_list": [ 00:10:19.791 { 00:10:19.791 "name": "pt1", 00:10:19.791 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.791 "is_configured": true, 00:10:19.791 "data_offset": 2048, 00:10:19.791 "data_size": 63488 00:10:19.791 }, 00:10:19.791 { 00:10:19.791 "name": "pt2", 00:10:19.791 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.791 "is_configured": true, 00:10:19.791 "data_offset": 2048, 00:10:19.791 "data_size": 63488 00:10:19.791 }, 00:10:19.791 { 00:10:19.791 "name": "pt3", 00:10:19.791 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.791 "is_configured": true, 00:10:19.791 "data_offset": 2048, 00:10:19.791 "data_size": 63488 00:10:19.791 }, 00:10:19.791 { 00:10:19.791 "name": "pt4", 00:10:19.791 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:19.791 "is_configured": true, 00:10:19.791 "data_offset": 2048, 00:10:19.791 "data_size": 63488 00:10:19.791 } 00:10:19.791 ] 00:10:19.791 }' 00:10:19.791 23:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.791 23:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.048 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:20.048 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:20.048 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:20.048 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:20.048 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:20.048 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:20.048 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:20.048 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:20.048 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.048 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.048 [2024-11-02 23:50:14.073663] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.048 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.048 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.048 "name": "raid_bdev1", 00:10:20.048 "aliases": [ 00:10:20.048 "00c38887-6cb3-4764-ae1b-0984da187245" 00:10:20.048 ], 00:10:20.048 "product_name": "Raid Volume", 00:10:20.048 "block_size": 512, 00:10:20.048 "num_blocks": 63488, 00:10:20.048 "uuid": "00c38887-6cb3-4764-ae1b-0984da187245", 00:10:20.048 "assigned_rate_limits": { 00:10:20.048 "rw_ios_per_sec": 0, 00:10:20.048 "rw_mbytes_per_sec": 0, 00:10:20.048 "r_mbytes_per_sec": 0, 00:10:20.048 "w_mbytes_per_sec": 0 00:10:20.048 }, 00:10:20.048 "claimed": false, 00:10:20.048 "zoned": false, 00:10:20.048 "supported_io_types": { 00:10:20.048 "read": true, 00:10:20.048 "write": true, 00:10:20.048 "unmap": false, 00:10:20.048 "flush": false, 00:10:20.048 "reset": true, 00:10:20.048 "nvme_admin": false, 00:10:20.048 "nvme_io": false, 00:10:20.048 "nvme_io_md": false, 00:10:20.048 "write_zeroes": true, 00:10:20.048 "zcopy": false, 00:10:20.048 "get_zone_info": false, 00:10:20.048 "zone_management": false, 00:10:20.048 "zone_append": false, 00:10:20.048 "compare": false, 00:10:20.048 "compare_and_write": false, 00:10:20.048 "abort": false, 00:10:20.048 "seek_hole": false, 00:10:20.048 "seek_data": false, 00:10:20.048 "copy": false, 00:10:20.048 "nvme_iov_md": false 00:10:20.048 }, 00:10:20.048 "memory_domains": [ 00:10:20.048 { 00:10:20.048 "dma_device_id": "system", 00:10:20.048 
"dma_device_type": 1 00:10:20.048 }, 00:10:20.048 { 00:10:20.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.048 "dma_device_type": 2 00:10:20.048 }, 00:10:20.048 { 00:10:20.048 "dma_device_id": "system", 00:10:20.048 "dma_device_type": 1 00:10:20.048 }, 00:10:20.048 { 00:10:20.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.048 "dma_device_type": 2 00:10:20.048 }, 00:10:20.048 { 00:10:20.048 "dma_device_id": "system", 00:10:20.048 "dma_device_type": 1 00:10:20.048 }, 00:10:20.048 { 00:10:20.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.048 "dma_device_type": 2 00:10:20.048 }, 00:10:20.048 { 00:10:20.048 "dma_device_id": "system", 00:10:20.048 "dma_device_type": 1 00:10:20.048 }, 00:10:20.048 { 00:10:20.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.048 "dma_device_type": 2 00:10:20.048 } 00:10:20.048 ], 00:10:20.048 "driver_specific": { 00:10:20.048 "raid": { 00:10:20.048 "uuid": "00c38887-6cb3-4764-ae1b-0984da187245", 00:10:20.048 "strip_size_kb": 0, 00:10:20.048 "state": "online", 00:10:20.048 "raid_level": "raid1", 00:10:20.048 "superblock": true, 00:10:20.048 "num_base_bdevs": 4, 00:10:20.048 "num_base_bdevs_discovered": 4, 00:10:20.048 "num_base_bdevs_operational": 4, 00:10:20.048 "base_bdevs_list": [ 00:10:20.048 { 00:10:20.048 "name": "pt1", 00:10:20.048 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.048 "is_configured": true, 00:10:20.048 "data_offset": 2048, 00:10:20.048 "data_size": 63488 00:10:20.048 }, 00:10:20.048 { 00:10:20.048 "name": "pt2", 00:10:20.048 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.048 "is_configured": true, 00:10:20.048 "data_offset": 2048, 00:10:20.049 "data_size": 63488 00:10:20.049 }, 00:10:20.049 { 00:10:20.049 "name": "pt3", 00:10:20.049 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.049 "is_configured": true, 00:10:20.049 "data_offset": 2048, 00:10:20.049 "data_size": 63488 00:10:20.049 }, 00:10:20.049 { 00:10:20.049 "name": "pt4", 00:10:20.049 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:10:20.049 "is_configured": true, 00:10:20.049 "data_offset": 2048, 00:10:20.049 "data_size": 63488 00:10:20.049 } 00:10:20.049 ] 00:10:20.049 } 00:10:20.049 } 00:10:20.049 }' 00:10:20.049 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.307 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:20.307 pt2 00:10:20.307 pt3 00:10:20.307 pt4' 00:10:20.307 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.307 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.307 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.307 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.307 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:20.307 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.307 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.307 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.307 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.307 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.307 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.307 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:20.307 23:50:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.307 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.307 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.307 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.307 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.308 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:20.308 [2024-11-02 23:50:14.397109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 00c38887-6cb3-4764-ae1b-0984da187245 '!=' 00c38887-6cb3-4764-ae1b-0984da187245 ']' 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.567 [2024-11-02 23:50:14.444791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:20.567 23:50:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.567 "name": "raid_bdev1", 00:10:20.567 "uuid": "00c38887-6cb3-4764-ae1b-0984da187245", 00:10:20.567 "strip_size_kb": 0, 00:10:20.567 "state": "online", 
00:10:20.567 "raid_level": "raid1", 00:10:20.567 "superblock": true, 00:10:20.567 "num_base_bdevs": 4, 00:10:20.567 "num_base_bdevs_discovered": 3, 00:10:20.567 "num_base_bdevs_operational": 3, 00:10:20.567 "base_bdevs_list": [ 00:10:20.567 { 00:10:20.567 "name": null, 00:10:20.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.567 "is_configured": false, 00:10:20.567 "data_offset": 0, 00:10:20.567 "data_size": 63488 00:10:20.567 }, 00:10:20.567 { 00:10:20.567 "name": "pt2", 00:10:20.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.567 "is_configured": true, 00:10:20.567 "data_offset": 2048, 00:10:20.567 "data_size": 63488 00:10:20.567 }, 00:10:20.567 { 00:10:20.567 "name": "pt3", 00:10:20.567 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.567 "is_configured": true, 00:10:20.567 "data_offset": 2048, 00:10:20.567 "data_size": 63488 00:10:20.567 }, 00:10:20.567 { 00:10:20.567 "name": "pt4", 00:10:20.567 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:20.567 "is_configured": true, 00:10:20.567 "data_offset": 2048, 00:10:20.567 "data_size": 63488 00:10:20.567 } 00:10:20.567 ] 00:10:20.567 }' 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.567 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.826 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:20.826 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.826 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.826 [2024-11-02 23:50:14.911913] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.826 [2024-11-02 23:50:14.911999] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.826 [2024-11-02 23:50:14.912108] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:10:20.826 [2024-11-02 23:50:14.912200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.826 [2024-11-02 23:50:14.912285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:10:20.826 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:21.086 
23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.086 23:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.086 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:21.086 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:21.086 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:21.086 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:21.086 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:21.086 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.086 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.086 [2024-11-02 23:50:15.007754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:21.086 [2024-11-02 23:50:15.007815] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.086 [2024-11-02 23:50:15.007847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:21.086 [2024-11-02 23:50:15.007858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.086 [2024-11-02 23:50:15.009967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.086 [2024-11-02 23:50:15.010048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:21.086 [2024-11-02 23:50:15.010122] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:21.086 [2024-11-02 23:50:15.010159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:21.086 pt2 00:10:21.086 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.086 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:21.086 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.086 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.086 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.086 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.086 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.086 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.086 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.086 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.086 23:50:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.087 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.087 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.087 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.087 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.087 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.087 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.087 "name": "raid_bdev1", 00:10:21.087 "uuid": "00c38887-6cb3-4764-ae1b-0984da187245", 00:10:21.087 "strip_size_kb": 0, 00:10:21.087 "state": "configuring", 00:10:21.087 "raid_level": "raid1", 00:10:21.087 "superblock": true, 00:10:21.087 "num_base_bdevs": 4, 00:10:21.087 "num_base_bdevs_discovered": 1, 00:10:21.087 "num_base_bdevs_operational": 3, 00:10:21.087 "base_bdevs_list": [ 00:10:21.087 { 00:10:21.087 "name": null, 00:10:21.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.087 "is_configured": false, 00:10:21.087 "data_offset": 2048, 00:10:21.087 "data_size": 63488 00:10:21.087 }, 00:10:21.087 { 00:10:21.087 "name": "pt2", 00:10:21.087 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.087 "is_configured": true, 00:10:21.087 "data_offset": 2048, 00:10:21.087 "data_size": 63488 00:10:21.087 }, 00:10:21.087 { 00:10:21.087 "name": null, 00:10:21.087 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.087 "is_configured": false, 00:10:21.087 "data_offset": 2048, 00:10:21.087 "data_size": 63488 00:10:21.087 }, 00:10:21.087 { 00:10:21.087 "name": null, 00:10:21.087 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:21.087 "is_configured": false, 00:10:21.087 "data_offset": 2048, 00:10:21.087 "data_size": 63488 00:10:21.087 } 00:10:21.087 ] 00:10:21.087 }' 
00:10:21.087 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.087 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.655 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:21.655 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:21.655 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:21.655 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.655 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.655 [2024-11-02 23:50:15.447035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:21.655 [2024-11-02 23:50:15.447170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.655 [2024-11-02 23:50:15.447213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:21.655 [2024-11-02 23:50:15.447249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.655 [2024-11-02 23:50:15.447677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.655 [2024-11-02 23:50:15.447734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:21.655 [2024-11-02 23:50:15.447852] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:21.655 [2024-11-02 23:50:15.447914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:21.655 pt3 00:10:21.655 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.655 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:10:21.655 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.655 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.655 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.655 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.655 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.655 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.655 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.656 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.656 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.656 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.656 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.656 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.656 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.656 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.656 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.656 "name": "raid_bdev1", 00:10:21.656 "uuid": "00c38887-6cb3-4764-ae1b-0984da187245", 00:10:21.656 "strip_size_kb": 0, 00:10:21.656 "state": "configuring", 00:10:21.656 "raid_level": "raid1", 00:10:21.656 "superblock": true, 00:10:21.656 "num_base_bdevs": 4, 00:10:21.656 "num_base_bdevs_discovered": 2, 00:10:21.656 "num_base_bdevs_operational": 3, 00:10:21.656 
"base_bdevs_list": [ 00:10:21.656 { 00:10:21.656 "name": null, 00:10:21.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.656 "is_configured": false, 00:10:21.656 "data_offset": 2048, 00:10:21.656 "data_size": 63488 00:10:21.656 }, 00:10:21.656 { 00:10:21.656 "name": "pt2", 00:10:21.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.656 "is_configured": true, 00:10:21.656 "data_offset": 2048, 00:10:21.656 "data_size": 63488 00:10:21.656 }, 00:10:21.656 { 00:10:21.656 "name": "pt3", 00:10:21.656 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.656 "is_configured": true, 00:10:21.656 "data_offset": 2048, 00:10:21.656 "data_size": 63488 00:10:21.656 }, 00:10:21.656 { 00:10:21.656 "name": null, 00:10:21.656 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:21.656 "is_configured": false, 00:10:21.656 "data_offset": 2048, 00:10:21.656 "data_size": 63488 00:10:21.656 } 00:10:21.656 ] 00:10:21.656 }' 00:10:21.656 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.656 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.915 [2024-11-02 23:50:15.898252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:21.915 [2024-11-02 23:50:15.898326] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.915 [2024-11-02 23:50:15.898349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:21.915 [2024-11-02 23:50:15.898360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.915 [2024-11-02 23:50:15.898816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.915 [2024-11-02 23:50:15.898837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:21.915 [2024-11-02 23:50:15.898917] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:21.915 [2024-11-02 23:50:15.898943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:21.915 [2024-11-02 23:50:15.899042] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:21.915 [2024-11-02 23:50:15.899054] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:21.915 [2024-11-02 23:50:15.899298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:21.915 [2024-11-02 23:50:15.899426] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:21.915 [2024-11-02 23:50:15.899436] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:10:21.915 [2024-11-02 23:50:15.899550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.915 pt4 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.915 "name": "raid_bdev1", 00:10:21.915 "uuid": "00c38887-6cb3-4764-ae1b-0984da187245", 00:10:21.915 "strip_size_kb": 0, 00:10:21.915 "state": "online", 00:10:21.915 "raid_level": "raid1", 00:10:21.915 "superblock": true, 00:10:21.915 "num_base_bdevs": 4, 00:10:21.915 "num_base_bdevs_discovered": 3, 00:10:21.915 "num_base_bdevs_operational": 3, 00:10:21.915 "base_bdevs_list": [ 00:10:21.915 { 00:10:21.915 "name": null, 00:10:21.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.915 "is_configured": false, 00:10:21.915 
"data_offset": 2048, 00:10:21.915 "data_size": 63488 00:10:21.915 }, 00:10:21.915 { 00:10:21.915 "name": "pt2", 00:10:21.915 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.915 "is_configured": true, 00:10:21.915 "data_offset": 2048, 00:10:21.915 "data_size": 63488 00:10:21.915 }, 00:10:21.915 { 00:10:21.915 "name": "pt3", 00:10:21.915 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.915 "is_configured": true, 00:10:21.915 "data_offset": 2048, 00:10:21.915 "data_size": 63488 00:10:21.915 }, 00:10:21.915 { 00:10:21.915 "name": "pt4", 00:10:21.915 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:21.915 "is_configured": true, 00:10:21.915 "data_offset": 2048, 00:10:21.915 "data_size": 63488 00:10:21.915 } 00:10:21.915 ] 00:10:21.915 }' 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.915 23:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.486 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:22.486 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.486 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.486 [2024-11-02 23:50:16.337464] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:22.486 [2024-11-02 23:50:16.337550] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.486 [2024-11-02 23:50:16.337645] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.486 [2024-11-02 23:50:16.337736] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.486 [2024-11-02 23:50:16.337809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:10:22.486 23:50:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.486 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.486 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:22.486 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.486 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.486 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.486 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:22.486 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:22.486 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:10:22.486 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:10:22.486 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:10:22.486 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.486 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.486 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.486 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:22.486 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.486 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.486 [2024-11-02 23:50:16.413337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:22.487 [2024-11-02 23:50:16.413391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:10:22.487 [2024-11-02 23:50:16.413429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:22.487 [2024-11-02 23:50:16.413438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.487 [2024-11-02 23:50:16.415783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.487 [2024-11-02 23:50:16.415831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:22.487 [2024-11-02 23:50:16.415924] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:22.487 [2024-11-02 23:50:16.415964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:22.487 [2024-11-02 23:50:16.416082] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:22.487 [2024-11-02 23:50:16.416095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:22.487 [2024-11-02 23:50:16.416121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:10:22.487 [2024-11-02 23:50:16.416179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:22.487 [2024-11-02 23:50:16.416290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:22.487 pt1 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.487 "name": "raid_bdev1", 00:10:22.487 "uuid": "00c38887-6cb3-4764-ae1b-0984da187245", 00:10:22.487 "strip_size_kb": 0, 00:10:22.487 "state": "configuring", 00:10:22.487 "raid_level": "raid1", 00:10:22.487 "superblock": true, 00:10:22.487 "num_base_bdevs": 4, 00:10:22.487 "num_base_bdevs_discovered": 2, 00:10:22.487 "num_base_bdevs_operational": 3, 00:10:22.487 "base_bdevs_list": [ 00:10:22.487 { 00:10:22.487 "name": null, 00:10:22.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.487 "is_configured": false, 00:10:22.487 "data_offset": 2048, 00:10:22.487 
"data_size": 63488 00:10:22.487 }, 00:10:22.487 { 00:10:22.487 "name": "pt2", 00:10:22.487 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.487 "is_configured": true, 00:10:22.487 "data_offset": 2048, 00:10:22.487 "data_size": 63488 00:10:22.487 }, 00:10:22.487 { 00:10:22.487 "name": "pt3", 00:10:22.487 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.487 "is_configured": true, 00:10:22.487 "data_offset": 2048, 00:10:22.487 "data_size": 63488 00:10:22.487 }, 00:10:22.487 { 00:10:22.487 "name": null, 00:10:22.487 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:22.487 "is_configured": false, 00:10:22.487 "data_offset": 2048, 00:10:22.487 "data_size": 63488 00:10:22.487 } 00:10:22.487 ] 00:10:22.487 }' 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.487 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.057 [2024-11-02 
23:50:16.936483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:23.057 [2024-11-02 23:50:16.936601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.057 [2024-11-02 23:50:16.936642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:10:23.057 [2024-11-02 23:50:16.936673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.057 [2024-11-02 23:50:16.937104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.057 [2024-11-02 23:50:16.937177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:23.057 [2024-11-02 23:50:16.937288] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:23.057 [2024-11-02 23:50:16.937344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:23.057 [2024-11-02 23:50:16.937480] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:10:23.057 [2024-11-02 23:50:16.937526] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:23.057 [2024-11-02 23:50:16.937820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:10:23.057 [2024-11-02 23:50:16.937979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:10:23.057 [2024-11-02 23:50:16.938016] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:10:23.057 [2024-11-02 23:50:16.938171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.057 pt4 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:23.057 23:50:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.057 "name": "raid_bdev1", 00:10:23.057 "uuid": "00c38887-6cb3-4764-ae1b-0984da187245", 00:10:23.057 "strip_size_kb": 0, 00:10:23.057 "state": "online", 00:10:23.057 "raid_level": "raid1", 00:10:23.057 "superblock": true, 00:10:23.057 "num_base_bdevs": 4, 00:10:23.057 "num_base_bdevs_discovered": 3, 00:10:23.057 "num_base_bdevs_operational": 3, 00:10:23.057 "base_bdevs_list": [ 00:10:23.057 { 
00:10:23.057 "name": null, 00:10:23.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.057 "is_configured": false, 00:10:23.057 "data_offset": 2048, 00:10:23.057 "data_size": 63488 00:10:23.057 }, 00:10:23.057 { 00:10:23.057 "name": "pt2", 00:10:23.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.057 "is_configured": true, 00:10:23.057 "data_offset": 2048, 00:10:23.057 "data_size": 63488 00:10:23.057 }, 00:10:23.057 { 00:10:23.057 "name": "pt3", 00:10:23.057 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.057 "is_configured": true, 00:10:23.057 "data_offset": 2048, 00:10:23.057 "data_size": 63488 00:10:23.057 }, 00:10:23.057 { 00:10:23.057 "name": "pt4", 00:10:23.057 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:23.057 "is_configured": true, 00:10:23.057 "data_offset": 2048, 00:10:23.057 "data_size": 63488 00:10:23.057 } 00:10:23.057 ] 00:10:23.057 }' 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.057 23:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.318 23:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:23.318 23:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:23.318 23:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.318 23:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.318 23:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.578 23:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:23.578 23:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:23.578 23:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.578 
23:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.578 23:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.578 [2024-11-02 23:50:17.447948] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.578 23:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.578 23:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 00c38887-6cb3-4764-ae1b-0984da187245 '!=' 00c38887-6cb3-4764-ae1b-0984da187245 ']' 00:10:23.578 23:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85101 00:10:23.578 23:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 85101 ']' 00:10:23.578 23:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 85101 00:10:23.578 23:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:10:23.578 23:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:23.578 23:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85101 00:10:23.578 23:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:23.578 23:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:23.578 23:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85101' 00:10:23.579 killing process with pid 85101 00:10:23.579 23:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 85101 00:10:23.579 [2024-11-02 23:50:17.532276] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.579 23:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 85101 00:10:23.579 [2024-11-02 23:50:17.532481] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.579 [2024-11-02 23:50:17.532562] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.579 [2024-11-02 23:50:17.532620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:10:23.579 [2024-11-02 23:50:17.577510] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.837 23:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:23.837 00:10:23.837 real 0m7.257s 00:10:23.837 user 0m12.244s 00:10:23.837 sys 0m1.601s 00:10:23.837 23:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:23.837 23:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.837 ************************************ 00:10:23.837 END TEST raid_superblock_test 00:10:23.837 ************************************ 00:10:23.837 23:50:17 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:10:23.837 23:50:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:23.837 23:50:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:23.837 23:50:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.837 ************************************ 00:10:23.837 START TEST raid_read_error_test 00:10:23.837 ************************************ 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:23.837 23:50:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HjoCX8NjBS 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85578 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85578 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 85578 ']' 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:23.837 23:50:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.099 [2024-11-02 23:50:17.966558] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:10:24.099 [2024-11-02 23:50:17.966770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85578 ] 00:10:24.099 [2024-11-02 23:50:18.119087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.099 [2024-11-02 23:50:18.147961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.099 [2024-11-02 23:50:18.189502] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.099 [2024-11-02 23:50:18.189618] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.040 BaseBdev1_malloc 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.040 true 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.040 [2024-11-02 23:50:18.847771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:25.040 [2024-11-02 23:50:18.847818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.040 [2024-11-02 23:50:18.847836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:25.040 [2024-11-02 23:50:18.847854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.040 [2024-11-02 23:50:18.850022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.040 [2024-11-02 23:50:18.850134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:25.040 BaseBdev1 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.040 BaseBdev2_malloc 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.040 true 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.040 [2024-11-02 23:50:18.888369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:25.040 [2024-11-02 23:50:18.888423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.040 [2024-11-02 23:50:18.888442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:25.040 [2024-11-02 23:50:18.888460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.040 [2024-11-02 23:50:18.890637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.040 [2024-11-02 23:50:18.890690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:25.040 BaseBdev2 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.040 BaseBdev3_malloc 00:10:25.040 23:50:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.040 true 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.040 [2024-11-02 23:50:18.928899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:25.040 [2024-11-02 23:50:18.928993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.040 [2024-11-02 23:50:18.929041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:25.040 [2024-11-02 23:50:18.929075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.040 [2024-11-02 23:50:18.931207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.040 [2024-11-02 23:50:18.931283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:25.040 BaseBdev3 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.040 BaseBdev4_malloc 00:10:25.040 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.041 true 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.041 [2024-11-02 23:50:18.978060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:25.041 [2024-11-02 23:50:18.978171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.041 [2024-11-02 23:50:18.978202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:25.041 [2024-11-02 23:50:18.978211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.041 [2024-11-02 23:50:18.980387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.041 [2024-11-02 23:50:18.980425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:25.041 BaseBdev4 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.041 [2024-11-02 23:50:18.990090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.041 [2024-11-02 23:50:18.991953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.041 [2024-11-02 23:50:18.992028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:25.041 [2024-11-02 23:50:18.992091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:25.041 [2024-11-02 23:50:18.992290] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:25.041 [2024-11-02 23:50:18.992303] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:25.041 [2024-11-02 23:50:18.992569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:25.041 [2024-11-02 23:50:18.992700] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:25.041 [2024-11-02 23:50:18.992712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:25.041 [2024-11-02 23:50:18.992865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:25.041 23:50:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.041 23:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.041 23:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.041 23:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.041 23:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.041 23:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.041 23:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.041 "name": "raid_bdev1", 00:10:25.041 "uuid": "b3b3c0b1-3a37-4178-9a4a-5ea99c4962a6", 00:10:25.041 "strip_size_kb": 0, 00:10:25.041 "state": "online", 00:10:25.041 "raid_level": "raid1", 00:10:25.041 "superblock": true, 00:10:25.041 "num_base_bdevs": 4, 00:10:25.041 "num_base_bdevs_discovered": 4, 00:10:25.041 "num_base_bdevs_operational": 4, 00:10:25.041 "base_bdevs_list": [ 00:10:25.041 { 
00:10:25.041 "name": "BaseBdev1", 00:10:25.041 "uuid": "d3f75219-4cd4-504c-81b0-4956e9bf04ce", 00:10:25.041 "is_configured": true, 00:10:25.041 "data_offset": 2048, 00:10:25.041 "data_size": 63488 00:10:25.041 }, 00:10:25.041 { 00:10:25.041 "name": "BaseBdev2", 00:10:25.041 "uuid": "04c9aa5d-6f69-57e4-83b7-5884d0403dbd", 00:10:25.041 "is_configured": true, 00:10:25.041 "data_offset": 2048, 00:10:25.041 "data_size": 63488 00:10:25.041 }, 00:10:25.041 { 00:10:25.041 "name": "BaseBdev3", 00:10:25.041 "uuid": "94bbd08c-9e62-5231-a9f1-311d6c729d93", 00:10:25.041 "is_configured": true, 00:10:25.041 "data_offset": 2048, 00:10:25.041 "data_size": 63488 00:10:25.041 }, 00:10:25.041 { 00:10:25.041 "name": "BaseBdev4", 00:10:25.041 "uuid": "4325efd6-0d60-5ce7-82ff-0005ae3d4d9a", 00:10:25.041 "is_configured": true, 00:10:25.041 "data_offset": 2048, 00:10:25.041 "data_size": 63488 00:10:25.041 } 00:10:25.041 ] 00:10:25.041 }' 00:10:25.041 23:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.041 23:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.610 23:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:25.610 23:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:25.610 [2024-11-02 23:50:19.569500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.549 23:50:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.549 23:50:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.549 "name": "raid_bdev1", 00:10:26.549 "uuid": "b3b3c0b1-3a37-4178-9a4a-5ea99c4962a6", 00:10:26.549 "strip_size_kb": 0, 00:10:26.549 "state": "online", 00:10:26.549 "raid_level": "raid1", 00:10:26.549 "superblock": true, 00:10:26.549 "num_base_bdevs": 4, 00:10:26.549 "num_base_bdevs_discovered": 4, 00:10:26.549 "num_base_bdevs_operational": 4, 00:10:26.549 "base_bdevs_list": [ 00:10:26.549 { 00:10:26.549 "name": "BaseBdev1", 00:10:26.549 "uuid": "d3f75219-4cd4-504c-81b0-4956e9bf04ce", 00:10:26.549 "is_configured": true, 00:10:26.549 "data_offset": 2048, 00:10:26.549 "data_size": 63488 00:10:26.549 }, 00:10:26.549 { 00:10:26.549 "name": "BaseBdev2", 00:10:26.549 "uuid": "04c9aa5d-6f69-57e4-83b7-5884d0403dbd", 00:10:26.549 "is_configured": true, 00:10:26.549 "data_offset": 2048, 00:10:26.549 "data_size": 63488 00:10:26.549 }, 00:10:26.549 { 00:10:26.549 "name": "BaseBdev3", 00:10:26.549 "uuid": "94bbd08c-9e62-5231-a9f1-311d6c729d93", 00:10:26.549 "is_configured": true, 00:10:26.549 "data_offset": 2048, 00:10:26.549 "data_size": 63488 00:10:26.549 }, 00:10:26.549 { 00:10:26.549 "name": "BaseBdev4", 00:10:26.549 "uuid": "4325efd6-0d60-5ce7-82ff-0005ae3d4d9a", 00:10:26.549 "is_configured": true, 00:10:26.549 "data_offset": 2048, 00:10:26.549 "data_size": 63488 00:10:26.549 } 00:10:26.549 ] 00:10:26.549 }' 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.549 23:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.119 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:27.119 23:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.119 23:50:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:27.119 [2024-11-02 23:50:20.948728] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.119 [2024-11-02 23:50:20.948838] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.119 [2024-11-02 23:50:20.951344] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.119 [2024-11-02 23:50:20.951433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.119 [2024-11-02 23:50:20.951574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.119 [2024-11-02 23:50:20.951629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:27.119 { 00:10:27.119 "results": [ 00:10:27.119 { 00:10:27.119 "job": "raid_bdev1", 00:10:27.119 "core_mask": "0x1", 00:10:27.119 "workload": "randrw", 00:10:27.119 "percentage": 50, 00:10:27.119 "status": "finished", 00:10:27.119 "queue_depth": 1, 00:10:27.119 "io_size": 131072, 00:10:27.119 "runtime": 1.380082, 00:10:27.119 "iops": 11588.441846209138, 00:10:27.119 "mibps": 1448.5552307761423, 00:10:27.119 "io_failed": 0, 00:10:27.119 "io_timeout": 0, 00:10:27.119 "avg_latency_us": 83.73531717069449, 00:10:27.119 "min_latency_us": 22.022707423580787, 00:10:27.119 "max_latency_us": 1516.7720524017468 00:10:27.119 } 00:10:27.119 ], 00:10:27.119 "core_count": 1 00:10:27.119 } 00:10:27.119 23:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.119 23:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85578 00:10:27.119 23:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 85578 ']' 00:10:27.119 23:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 85578 00:10:27.119 23:50:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # uname 00:10:27.119 23:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:27.119 23:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85578 00:10:27.119 23:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:27.119 killing process with pid 85578 00:10:27.119 23:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:27.119 23:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85578' 00:10:27.119 23:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 85578 00:10:27.119 [2024-11-02 23:50:20.990768] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:27.119 23:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 85578 00:10:27.119 [2024-11-02 23:50:21.025485] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:27.379 23:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HjoCX8NjBS 00:10:27.379 23:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:27.379 23:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:27.379 23:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:27.379 23:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:27.379 23:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:27.379 23:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:27.379 23:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:27.379 00:10:27.379 real 0m3.371s 00:10:27.379 user 0m4.306s 00:10:27.379 sys 0m0.554s 
00:10:27.379 23:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:27.379 ************************************ 00:10:27.379 END TEST raid_read_error_test 00:10:27.379 ************************************ 00:10:27.379 23:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.379 23:50:21 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:10:27.379 23:50:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:27.379 23:50:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:27.379 23:50:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:27.379 ************************************ 00:10:27.379 START TEST raid_write_error_test 00:10:27.379 ************************************ 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0Ou9uQbFDu 00:10:27.379 23:50:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85716 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85716 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 85716 ']' 00:10:27.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:27.379 23:50:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.379 [2024-11-02 23:50:21.415284] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:10:27.379 [2024-11-02 23:50:21.415412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85716 ] 00:10:27.639 [2024-11-02 23:50:21.568273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.639 [2024-11-02 23:50:21.594916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.639 [2024-11-02 23:50:21.636631] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.639 [2024-11-02 23:50:21.636664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.207 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.208 BaseBdev1_malloc 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.208 true 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.208 [2024-11-02 23:50:22.270010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:28.208 [2024-11-02 23:50:22.270164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.208 [2024-11-02 23:50:22.270189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:28.208 [2024-11-02 23:50:22.270198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.208 [2024-11-02 23:50:22.272250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.208 [2024-11-02 23:50:22.272291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:28.208 BaseBdev1 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.208 BaseBdev2_malloc 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:28.208 23:50:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.208 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.468 true 00:10:28.468 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.468 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:28.468 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.468 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.468 [2024-11-02 23:50:22.306486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:28.468 [2024-11-02 23:50:22.306542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.468 [2024-11-02 23:50:22.306559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:28.468 [2024-11-02 23:50:22.306575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.468 [2024-11-02 23:50:22.308552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.468 [2024-11-02 23:50:22.308589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:28.468 BaseBdev2 00:10:28.468 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.468 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.468 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:28.468 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.468 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:28.468 BaseBdev3_malloc 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.469 true 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.469 [2024-11-02 23:50:22.346851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:28.469 [2024-11-02 23:50:22.346903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.469 [2024-11-02 23:50:22.346922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:28.469 [2024-11-02 23:50:22.346931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.469 [2024-11-02 23:50:22.349101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.469 [2024-11-02 23:50:22.349140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:28.469 BaseBdev3 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.469 BaseBdev4_malloc 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.469 true 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.469 [2024-11-02 23:50:22.397985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:28.469 [2024-11-02 23:50:22.398137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.469 [2024-11-02 23:50:22.398165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:28.469 [2024-11-02 23:50:22.398175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.469 [2024-11-02 23:50:22.400365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.469 [2024-11-02 23:50:22.400403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:28.469 BaseBdev4 
00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.469 [2024-11-02 23:50:22.410001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.469 [2024-11-02 23:50:22.411837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.469 [2024-11-02 23:50:22.411909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.469 [2024-11-02 23:50:22.411968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:28.469 [2024-11-02 23:50:22.412168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:28.469 [2024-11-02 23:50:22.412180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:28.469 [2024-11-02 23:50:22.412418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:28.469 [2024-11-02 23:50:22.412562] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:28.469 [2024-11-02 23:50:22.412574] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:28.469 [2024-11-02 23:50:22.412701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.469 "name": "raid_bdev1", 00:10:28.469 "uuid": "752aac89-b97f-4066-ae71-5bcdb27453aa", 00:10:28.469 "strip_size_kb": 0, 00:10:28.469 "state": "online", 00:10:28.469 "raid_level": "raid1", 00:10:28.469 "superblock": true, 00:10:28.469 "num_base_bdevs": 4, 00:10:28.469 "num_base_bdevs_discovered": 4, 00:10:28.469 
"num_base_bdevs_operational": 4, 00:10:28.469 "base_bdevs_list": [ 00:10:28.469 { 00:10:28.469 "name": "BaseBdev1", 00:10:28.469 "uuid": "724f7dd6-26e5-5dad-88a2-dac5530e8861", 00:10:28.469 "is_configured": true, 00:10:28.469 "data_offset": 2048, 00:10:28.469 "data_size": 63488 00:10:28.469 }, 00:10:28.469 { 00:10:28.469 "name": "BaseBdev2", 00:10:28.469 "uuid": "374f6fa1-391e-5575-9b0f-1ff4b22e6d7f", 00:10:28.469 "is_configured": true, 00:10:28.469 "data_offset": 2048, 00:10:28.469 "data_size": 63488 00:10:28.469 }, 00:10:28.469 { 00:10:28.469 "name": "BaseBdev3", 00:10:28.469 "uuid": "2be43027-d43a-5129-aefa-87ec3ddbfe54", 00:10:28.469 "is_configured": true, 00:10:28.469 "data_offset": 2048, 00:10:28.469 "data_size": 63488 00:10:28.469 }, 00:10:28.469 { 00:10:28.469 "name": "BaseBdev4", 00:10:28.469 "uuid": "07456362-046a-55f6-9c51-588ac9f8f6b4", 00:10:28.469 "is_configured": true, 00:10:28.469 "data_offset": 2048, 00:10:28.469 "data_size": 63488 00:10:28.469 } 00:10:28.469 ] 00:10:28.469 }' 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.469 23:50:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.038 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:29.038 23:50:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:29.038 [2024-11-02 23:50:22.941463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:29.976 23:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:29.976 23:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.976 23:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.977 [2024-11-02 23:50:23.864296] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:29.977 [2024-11-02 23:50:23.864367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:29.977 [2024-11-02 23:50:23.864598] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.977 "name": "raid_bdev1", 00:10:29.977 "uuid": "752aac89-b97f-4066-ae71-5bcdb27453aa", 00:10:29.977 "strip_size_kb": 0, 00:10:29.977 "state": "online", 00:10:29.977 "raid_level": "raid1", 00:10:29.977 "superblock": true, 00:10:29.977 "num_base_bdevs": 4, 00:10:29.977 "num_base_bdevs_discovered": 3, 00:10:29.977 "num_base_bdevs_operational": 3, 00:10:29.977 "base_bdevs_list": [ 00:10:29.977 { 00:10:29.977 "name": null, 00:10:29.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.977 "is_configured": false, 00:10:29.977 "data_offset": 0, 00:10:29.977 "data_size": 63488 00:10:29.977 }, 00:10:29.977 { 00:10:29.977 "name": "BaseBdev2", 00:10:29.977 "uuid": "374f6fa1-391e-5575-9b0f-1ff4b22e6d7f", 00:10:29.977 "is_configured": true, 00:10:29.977 "data_offset": 2048, 00:10:29.977 "data_size": 63488 00:10:29.977 }, 00:10:29.977 { 00:10:29.977 "name": "BaseBdev3", 00:10:29.977 "uuid": "2be43027-d43a-5129-aefa-87ec3ddbfe54", 00:10:29.977 "is_configured": true, 00:10:29.977 "data_offset": 2048, 00:10:29.977 "data_size": 63488 00:10:29.977 }, 00:10:29.977 { 00:10:29.977 "name": "BaseBdev4", 00:10:29.977 "uuid": "07456362-046a-55f6-9c51-588ac9f8f6b4", 00:10:29.977 "is_configured": true, 00:10:29.977 "data_offset": 2048, 00:10:29.977 "data_size": 63488 00:10:29.977 } 00:10:29.977 ] 
00:10:29.977 }' 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.977 23:50:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.236 23:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:30.498 23:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.498 23:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.498 [2024-11-02 23:50:24.336032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:30.498 [2024-11-02 23:50:24.336157] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.498 [2024-11-02 23:50:24.338646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.498 [2024-11-02 23:50:24.338755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.498 [2024-11-02 23:50:24.338879] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.498 [2024-11-02 23:50:24.338928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:30.498 { 00:10:30.498 "results": [ 00:10:30.498 { 00:10:30.498 "job": "raid_bdev1", 00:10:30.498 "core_mask": "0x1", 00:10:30.498 "workload": "randrw", 00:10:30.498 "percentage": 50, 00:10:30.498 "status": "finished", 00:10:30.498 "queue_depth": 1, 00:10:30.498 "io_size": 131072, 00:10:30.498 "runtime": 1.395552, 00:10:30.498 "iops": 12605.048038338951, 00:10:30.498 "mibps": 1575.631004792369, 00:10:30.498 "io_failed": 0, 00:10:30.498 "io_timeout": 0, 00:10:30.498 "avg_latency_us": 76.78748069613803, 00:10:30.498 "min_latency_us": 22.358078602620086, 00:10:30.498 "max_latency_us": 1459.5353711790392 00:10:30.498 } 00:10:30.498 ], 00:10:30.498 "core_count": 1 
00:10:30.498 } 00:10:30.498 23:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.498 23:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85716 00:10:30.498 23:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 85716 ']' 00:10:30.498 23:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 85716 00:10:30.498 23:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:10:30.498 23:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:30.498 23:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85716 00:10:30.498 killing process with pid 85716 00:10:30.498 23:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:30.498 23:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:30.498 23:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85716' 00:10:30.498 23:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 85716 00:10:30.498 [2024-11-02 23:50:24.386018] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.498 23:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 85716 00:10:30.498 [2024-11-02 23:50:24.421865] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.786 23:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0Ou9uQbFDu 00:10:30.786 23:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:30.786 23:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:30.786 23:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:10:30.786 23:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:30.786 23:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:30.786 23:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:30.786 ************************************ 00:10:30.786 END TEST raid_write_error_test 00:10:30.786 ************************************ 00:10:30.786 23:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:30.786 00:10:30.786 real 0m3.325s 00:10:30.786 user 0m4.220s 00:10:30.786 sys 0m0.547s 00:10:30.786 23:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:30.786 23:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.786 23:50:24 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:10:30.786 23:50:24 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:10:30.786 23:50:24 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:10:30.786 23:50:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:10:30.786 23:50:24 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:30.786 23:50:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:30.786 ************************************ 00:10:30.786 START TEST raid_rebuild_test 00:10:30.786 ************************************ 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:10:30.786 
23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:10:30.786 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85843 00:10:30.787 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:30.787 23:50:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85843 00:10:30.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.787 23:50:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 85843 ']' 00:10:30.787 23:50:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.787 23:50:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:30.787 23:50:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.787 23:50:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:30.787 23:50:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.787 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:30.787 Zero copy mechanism will not be used. 00:10:30.787 [2024-11-02 23:50:24.810086] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:10:30.787 [2024-11-02 23:50:24.810194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85843 ] 00:10:31.046 [2024-11-02 23:50:24.966868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.046 [2024-11-02 23:50:24.993517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.046 [2024-11-02 23:50:25.035017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.046 [2024-11-02 23:50:25.035053] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.616 BaseBdev1_malloc 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.616 [2024-11-02 23:50:25.660408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:31.616 
[2024-11-02 23:50:25.660477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.616 [2024-11-02 23:50:25.660502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:31.616 [2024-11-02 23:50:25.660517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.616 [2024-11-02 23:50:25.662653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.616 [2024-11-02 23:50:25.662758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:31.616 BaseBdev1 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.616 BaseBdev2_malloc 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.616 [2024-11-02 23:50:25.688825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:31.616 [2024-11-02 23:50:25.688943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.616 [2024-11-02 23:50:25.688967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:10:31.616 [2024-11-02 23:50:25.688977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.616 [2024-11-02 23:50:25.691110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.616 [2024-11-02 23:50:25.691152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:31.616 BaseBdev2 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.616 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.877 spare_malloc 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.877 spare_delay 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.877 [2024-11-02 23:50:25.729177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:31.877 [2024-11-02 23:50:25.729235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:10:31.877 [2024-11-02 23:50:25.729255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:31.877 [2024-11-02 23:50:25.729264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.877 [2024-11-02 23:50:25.731306] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.877 [2024-11-02 23:50:25.731419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:31.877 spare 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.877 [2024-11-02 23:50:25.741193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.877 [2024-11-02 23:50:25.742961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.877 [2024-11-02 23:50:25.743053] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:31.877 [2024-11-02 23:50:25.743065] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:31.877 [2024-11-02 23:50:25.743316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:31.877 [2024-11-02 23:50:25.743448] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:31.877 [2024-11-02 23:50:25.743465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:31.877 [2024-11-02 23:50:25.743577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.877 "name": "raid_bdev1", 00:10:31.877 "uuid": "be073e3d-c628-4ca5-b99b-69f6eaeebef6", 00:10:31.877 "strip_size_kb": 0, 00:10:31.877 "state": "online", 00:10:31.877 
"raid_level": "raid1", 00:10:31.877 "superblock": false, 00:10:31.877 "num_base_bdevs": 2, 00:10:31.877 "num_base_bdevs_discovered": 2, 00:10:31.877 "num_base_bdevs_operational": 2, 00:10:31.877 "base_bdevs_list": [ 00:10:31.877 { 00:10:31.877 "name": "BaseBdev1", 00:10:31.877 "uuid": "f0137db3-252d-5400-9715-25ac78a21d93", 00:10:31.877 "is_configured": true, 00:10:31.877 "data_offset": 0, 00:10:31.877 "data_size": 65536 00:10:31.877 }, 00:10:31.877 { 00:10:31.877 "name": "BaseBdev2", 00:10:31.877 "uuid": "03cffc0e-f4ad-539f-8327-c596e3720025", 00:10:31.877 "is_configured": true, 00:10:31.877 "data_offset": 0, 00:10:31.877 "data_size": 65536 00:10:31.877 } 00:10:31.877 ] 00:10:31.877 }' 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.877 23:50:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.137 23:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:32.137 23:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:32.137 23:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.137 23:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.137 [2024-11-02 23:50:26.220647] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.398 23:50:26 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:32.398 23:50:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:32.398 [2024-11-02 23:50:26.488012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:32.658 /dev/nbd0 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:32.658 1+0 records in 00:10:32.658 1+0 records out 00:10:32.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211809 s, 19.3 MB/s 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:10:32.658 23:50:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:10:36.851 65536+0 records in 00:10:36.851 65536+0 records out 00:10:36.851 33554432 bytes (34 MB, 32 MiB) copied, 3.65178 s, 9.2 MB/s 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:36.851 [2024-11-02 23:50:30.388794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.851 [2024-11-02 23:50:30.430769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.851 23:50:30 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.851 "name": "raid_bdev1", 00:10:36.851 "uuid": "be073e3d-c628-4ca5-b99b-69f6eaeebef6", 00:10:36.851 "strip_size_kb": 0, 00:10:36.851 "state": "online", 00:10:36.851 "raid_level": "raid1", 00:10:36.851 "superblock": false, 00:10:36.851 "num_base_bdevs": 2, 00:10:36.851 "num_base_bdevs_discovered": 1, 00:10:36.851 "num_base_bdevs_operational": 1, 00:10:36.851 "base_bdevs_list": [ 00:10:36.851 { 00:10:36.851 "name": null, 00:10:36.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.851 "is_configured": false, 00:10:36.851 "data_offset": 0, 00:10:36.851 "data_size": 65536 00:10:36.851 }, 00:10:36.851 { 00:10:36.851 "name": "BaseBdev2", 00:10:36.851 "uuid": "03cffc0e-f4ad-539f-8327-c596e3720025", 00:10:36.851 "is_configured": true, 00:10:36.851 "data_offset": 0, 00:10:36.851 "data_size": 65536 00:10:36.851 } 00:10:36.851 ] 00:10:36.851 }' 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.851 [2024-11-02 23:50:30.905956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:36.851 [2024-11-02 23:50:30.918622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220 
00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.851 23:50:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:36.851 [2024-11-02 23:50:30.921175] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:38.230 23:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:38.230 23:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:38.230 23:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:38.230 23:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:38.230 23:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:38.230 23:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.230 23:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.230 23:50:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.230 23:50:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.230 23:50:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.230 23:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:38.230 "name": "raid_bdev1", 00:10:38.230 "uuid": "be073e3d-c628-4ca5-b99b-69f6eaeebef6", 00:10:38.230 "strip_size_kb": 0, 00:10:38.230 "state": "online", 00:10:38.230 "raid_level": "raid1", 00:10:38.230 "superblock": false, 00:10:38.230 "num_base_bdevs": 2, 00:10:38.230 "num_base_bdevs_discovered": 2, 00:10:38.230 "num_base_bdevs_operational": 2, 00:10:38.230 "process": { 00:10:38.230 "type": "rebuild", 00:10:38.230 "target": "spare", 00:10:38.230 "progress": { 00:10:38.230 
"blocks": 20480, 00:10:38.230 "percent": 31 00:10:38.230 } 00:10:38.230 }, 00:10:38.230 "base_bdevs_list": [ 00:10:38.230 { 00:10:38.230 "name": "spare", 00:10:38.230 "uuid": "c2057085-0b97-5b49-9437-7d76ebac8314", 00:10:38.230 "is_configured": true, 00:10:38.230 "data_offset": 0, 00:10:38.230 "data_size": 65536 00:10:38.230 }, 00:10:38.230 { 00:10:38.230 "name": "BaseBdev2", 00:10:38.230 "uuid": "03cffc0e-f4ad-539f-8327-c596e3720025", 00:10:38.230 "is_configured": true, 00:10:38.230 "data_offset": 0, 00:10:38.230 "data_size": 65536 00:10:38.230 } 00:10:38.230 ] 00:10:38.230 }' 00:10:38.230 23:50:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.230 [2024-11-02 23:50:32.068638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:38.230 [2024-11-02 23:50:32.126163] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:38.230 [2024-11-02 23:50:32.126336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.230 [2024-11-02 23:50:32.126360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:38.230 [2024-11-02 23:50:32.126368] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:38.230 23:50:32 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.230 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.230 "name": "raid_bdev1", 00:10:38.230 "uuid": "be073e3d-c628-4ca5-b99b-69f6eaeebef6", 00:10:38.230 "strip_size_kb": 0, 00:10:38.230 "state": "online", 00:10:38.230 "raid_level": "raid1", 00:10:38.230 
"superblock": false, 00:10:38.230 "num_base_bdevs": 2, 00:10:38.230 "num_base_bdevs_discovered": 1, 00:10:38.230 "num_base_bdevs_operational": 1, 00:10:38.230 "base_bdevs_list": [ 00:10:38.230 { 00:10:38.230 "name": null, 00:10:38.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.230 "is_configured": false, 00:10:38.230 "data_offset": 0, 00:10:38.230 "data_size": 65536 00:10:38.230 }, 00:10:38.230 { 00:10:38.230 "name": "BaseBdev2", 00:10:38.230 "uuid": "03cffc0e-f4ad-539f-8327-c596e3720025", 00:10:38.231 "is_configured": true, 00:10:38.231 "data_offset": 0, 00:10:38.231 "data_size": 65536 00:10:38.231 } 00:10:38.231 ] 00:10:38.231 }' 00:10:38.231 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.231 23:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.490 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:38.490 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:38.490 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:38.490 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:38.490 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:38.490 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.490 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.749 23:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.749 23:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.749 23:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.749 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:10:38.749 "name": "raid_bdev1", 00:10:38.749 "uuid": "be073e3d-c628-4ca5-b99b-69f6eaeebef6", 00:10:38.749 "strip_size_kb": 0, 00:10:38.749 "state": "online", 00:10:38.749 "raid_level": "raid1", 00:10:38.749 "superblock": false, 00:10:38.749 "num_base_bdevs": 2, 00:10:38.749 "num_base_bdevs_discovered": 1, 00:10:38.749 "num_base_bdevs_operational": 1, 00:10:38.749 "base_bdevs_list": [ 00:10:38.749 { 00:10:38.749 "name": null, 00:10:38.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.749 "is_configured": false, 00:10:38.749 "data_offset": 0, 00:10:38.749 "data_size": 65536 00:10:38.749 }, 00:10:38.749 { 00:10:38.749 "name": "BaseBdev2", 00:10:38.749 "uuid": "03cffc0e-f4ad-539f-8327-c596e3720025", 00:10:38.749 "is_configured": true, 00:10:38.749 "data_offset": 0, 00:10:38.749 "data_size": 65536 00:10:38.749 } 00:10:38.749 ] 00:10:38.749 }' 00:10:38.749 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:38.749 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:38.749 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:38.749 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:38.749 23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:38.749 23:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.749 23:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.749 [2024-11-02 23:50:32.730389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:38.749 [2024-11-02 23:50:32.735284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d062f0 00:10:38.749 23:50:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.749 
23:50:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:38.749 [2024-11-02 23:50:32.737195] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:39.709 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:39.709 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:39.709 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:39.709 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:39.709 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:39.709 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.709 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.709 23:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.709 23:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.709 23:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.709 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:39.709 "name": "raid_bdev1", 00:10:39.709 "uuid": "be073e3d-c628-4ca5-b99b-69f6eaeebef6", 00:10:39.709 "strip_size_kb": 0, 00:10:39.709 "state": "online", 00:10:39.709 "raid_level": "raid1", 00:10:39.709 "superblock": false, 00:10:39.709 "num_base_bdevs": 2, 00:10:39.709 "num_base_bdevs_discovered": 2, 00:10:39.709 "num_base_bdevs_operational": 2, 00:10:39.709 "process": { 00:10:39.709 "type": "rebuild", 00:10:39.709 "target": "spare", 00:10:39.709 "progress": { 00:10:39.709 "blocks": 20480, 00:10:39.709 "percent": 31 00:10:39.709 } 00:10:39.709 }, 00:10:39.709 "base_bdevs_list": [ 
00:10:39.709 { 00:10:39.709 "name": "spare", 00:10:39.709 "uuid": "c2057085-0b97-5b49-9437-7d76ebac8314", 00:10:39.709 "is_configured": true, 00:10:39.709 "data_offset": 0, 00:10:39.709 "data_size": 65536 00:10:39.709 }, 00:10:39.709 { 00:10:39.709 "name": "BaseBdev2", 00:10:39.709 "uuid": "03cffc0e-f4ad-539f-8327-c596e3720025", 00:10:39.709 "is_configured": true, 00:10:39.709 "data_offset": 0, 00:10:39.709 "data_size": 65536 00:10:39.709 } 00:10:39.709 ] 00:10:39.709 }' 00:10:39.709 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=292 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:39.969 
23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:39.969 "name": "raid_bdev1", 00:10:39.969 "uuid": "be073e3d-c628-4ca5-b99b-69f6eaeebef6", 00:10:39.969 "strip_size_kb": 0, 00:10:39.969 "state": "online", 00:10:39.969 "raid_level": "raid1", 00:10:39.969 "superblock": false, 00:10:39.969 "num_base_bdevs": 2, 00:10:39.969 "num_base_bdevs_discovered": 2, 00:10:39.969 "num_base_bdevs_operational": 2, 00:10:39.969 "process": { 00:10:39.969 "type": "rebuild", 00:10:39.969 "target": "spare", 00:10:39.969 "progress": { 00:10:39.969 "blocks": 22528, 00:10:39.969 "percent": 34 00:10:39.969 } 00:10:39.969 }, 00:10:39.969 "base_bdevs_list": [ 00:10:39.969 { 00:10:39.969 "name": "spare", 00:10:39.969 "uuid": "c2057085-0b97-5b49-9437-7d76ebac8314", 00:10:39.969 "is_configured": true, 00:10:39.969 "data_offset": 0, 00:10:39.969 "data_size": 65536 00:10:39.969 }, 00:10:39.969 { 00:10:39.969 "name": "BaseBdev2", 00:10:39.969 "uuid": "03cffc0e-f4ad-539f-8327-c596e3720025", 00:10:39.969 "is_configured": true, 00:10:39.969 "data_offset": 0, 00:10:39.969 "data_size": 65536 00:10:39.969 } 00:10:39.969 ] 00:10:39.969 }' 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:10:39.969 23:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:39.969 23:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:39.969 23:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:41.349 23:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:41.349 23:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:41.349 23:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:41.349 23:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:41.349 23:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:41.349 23:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:41.349 23:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.349 23:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.349 23:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.349 23:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.349 23:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.349 23:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:41.349 "name": "raid_bdev1", 00:10:41.349 "uuid": "be073e3d-c628-4ca5-b99b-69f6eaeebef6", 00:10:41.349 "strip_size_kb": 0, 00:10:41.349 "state": "online", 00:10:41.349 "raid_level": "raid1", 00:10:41.349 "superblock": false, 00:10:41.349 "num_base_bdevs": 2, 00:10:41.349 "num_base_bdevs_discovered": 2, 00:10:41.349 "num_base_bdevs_operational": 2, 00:10:41.349 "process": { 
00:10:41.349 "type": "rebuild", 00:10:41.349 "target": "spare", 00:10:41.350 "progress": { 00:10:41.350 "blocks": 45056, 00:10:41.350 "percent": 68 00:10:41.350 } 00:10:41.350 }, 00:10:41.350 "base_bdevs_list": [ 00:10:41.350 { 00:10:41.350 "name": "spare", 00:10:41.350 "uuid": "c2057085-0b97-5b49-9437-7d76ebac8314", 00:10:41.350 "is_configured": true, 00:10:41.350 "data_offset": 0, 00:10:41.350 "data_size": 65536 00:10:41.350 }, 00:10:41.350 { 00:10:41.350 "name": "BaseBdev2", 00:10:41.350 "uuid": "03cffc0e-f4ad-539f-8327-c596e3720025", 00:10:41.350 "is_configured": true, 00:10:41.350 "data_offset": 0, 00:10:41.350 "data_size": 65536 00:10:41.350 } 00:10:41.350 ] 00:10:41.350 }' 00:10:41.350 23:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:41.350 23:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:41.350 23:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:41.350 23:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:41.350 23:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:41.918 [2024-11-02 23:50:35.948501] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:41.918 [2024-11-02 23:50:35.948595] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:41.918 [2024-11-02 23:50:35.948640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.178 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:42.178 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:42.178 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:42.178 23:50:36 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:42.178 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:42.178 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:42.178 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.178 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.178 23:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.178 23:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.178 23:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.178 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:42.178 "name": "raid_bdev1", 00:10:42.178 "uuid": "be073e3d-c628-4ca5-b99b-69f6eaeebef6", 00:10:42.178 "strip_size_kb": 0, 00:10:42.178 "state": "online", 00:10:42.178 "raid_level": "raid1", 00:10:42.178 "superblock": false, 00:10:42.178 "num_base_bdevs": 2, 00:10:42.178 "num_base_bdevs_discovered": 2, 00:10:42.178 "num_base_bdevs_operational": 2, 00:10:42.178 "base_bdevs_list": [ 00:10:42.178 { 00:10:42.178 "name": "spare", 00:10:42.178 "uuid": "c2057085-0b97-5b49-9437-7d76ebac8314", 00:10:42.178 "is_configured": true, 00:10:42.178 "data_offset": 0, 00:10:42.178 "data_size": 65536 00:10:42.178 }, 00:10:42.178 { 00:10:42.178 "name": "BaseBdev2", 00:10:42.178 "uuid": "03cffc0e-f4ad-539f-8327-c596e3720025", 00:10:42.178 "is_configured": true, 00:10:42.178 "data_offset": 0, 00:10:42.178 "data_size": 65536 00:10:42.178 } 00:10:42.178 ] 00:10:42.178 }' 00:10:42.178 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:42.438 23:50:36 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:42.438 "name": "raid_bdev1", 00:10:42.438 "uuid": "be073e3d-c628-4ca5-b99b-69f6eaeebef6", 00:10:42.438 "strip_size_kb": 0, 00:10:42.438 "state": "online", 00:10:42.438 "raid_level": "raid1", 00:10:42.438 "superblock": false, 00:10:42.438 "num_base_bdevs": 2, 00:10:42.438 "num_base_bdevs_discovered": 2, 00:10:42.438 "num_base_bdevs_operational": 2, 00:10:42.438 "base_bdevs_list": [ 00:10:42.438 { 00:10:42.438 "name": "spare", 00:10:42.438 "uuid": "c2057085-0b97-5b49-9437-7d76ebac8314", 00:10:42.438 "is_configured": true, 
00:10:42.438 "data_offset": 0, 00:10:42.438 "data_size": 65536 00:10:42.438 }, 00:10:42.438 { 00:10:42.438 "name": "BaseBdev2", 00:10:42.438 "uuid": "03cffc0e-f4ad-539f-8327-c596e3720025", 00:10:42.438 "is_configured": true, 00:10:42.438 "data_offset": 0, 00:10:42.438 "data_size": 65536 00:10:42.438 } 00:10:42.438 ] 00:10:42.438 }' 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.438 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.439 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:42.439 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.439 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.439 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.439 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.439 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.439 23:50:36 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.439 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.439 23:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.439 23:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.439 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.439 "name": "raid_bdev1", 00:10:42.439 "uuid": "be073e3d-c628-4ca5-b99b-69f6eaeebef6", 00:10:42.439 "strip_size_kb": 0, 00:10:42.439 "state": "online", 00:10:42.439 "raid_level": "raid1", 00:10:42.439 "superblock": false, 00:10:42.439 "num_base_bdevs": 2, 00:10:42.439 "num_base_bdevs_discovered": 2, 00:10:42.439 "num_base_bdevs_operational": 2, 00:10:42.439 "base_bdevs_list": [ 00:10:42.439 { 00:10:42.439 "name": "spare", 00:10:42.439 "uuid": "c2057085-0b97-5b49-9437-7d76ebac8314", 00:10:42.439 "is_configured": true, 00:10:42.439 "data_offset": 0, 00:10:42.439 "data_size": 65536 00:10:42.439 }, 00:10:42.439 { 00:10:42.439 "name": "BaseBdev2", 00:10:42.439 "uuid": "03cffc0e-f4ad-539f-8327-c596e3720025", 00:10:42.439 "is_configured": true, 00:10:42.439 "data_offset": 0, 00:10:42.439 "data_size": 65536 00:10:42.439 } 00:10:42.439 ] 00:10:42.439 }' 00:10:42.439 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.439 23:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.007 [2024-11-02 23:50:36.891800] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:43.007 [2024-11-02 23:50:36.891831] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.007 [2024-11-02 23:50:36.891915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.007 [2024-11-02 23:50:36.892006] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:43.007 [2024-11-02 23:50:36.892040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:43.007 23:50:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:10:43.267 /dev/nbd0 00:10:43.267 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:43.267 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:43.267 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:10:43.267 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:10:43.267 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:43.267 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:43.267 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:10:43.267 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:10:43.267 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:43.267 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:43.267 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:43.267 1+0 records in 00:10:43.267 1+0 records out 00:10:43.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468654 s, 8.7 MB/s 00:10:43.267 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.267 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:10:43.267 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.267 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:43.267 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:10:43.267 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:43.267 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:43.267 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:10:43.527 /dev/nbd1 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:43.527 1+0 records in 00:10:43.527 1+0 records out 00:10:43.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029708 s, 13.8 MB/s 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:43.527 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:43.787 23:50:37 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:43.787 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:43.787 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:43.787 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:43.787 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:43.787 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:43.787 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:43.787 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:43.787 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:43.787 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:44.056 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:44.056 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:44.056 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:44.056 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:44.056 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:44.056 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:44.056 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:44.056 23:50:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:44.056 23:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:10:44.056 23:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85843 00:10:44.056 23:50:37 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 85843 ']' 00:10:44.056 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 85843 00:10:44.056 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:10:44.056 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:44.056 23:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85843 00:10:44.056 killing process with pid 85843 00:10:44.056 Received shutdown signal, test time was about 60.000000 seconds 00:10:44.056 00:10:44.056 Latency(us) 00:10:44.056 [2024-11-02T23:50:38.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.056 [2024-11-02T23:50:38.151Z] =================================================================================================================== 00:10:44.056 [2024-11-02T23:50:38.151Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:44.056 23:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:44.056 23:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:44.056 23:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85843' 00:10:44.056 23:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 85843 00:10:44.056 [2024-11-02 23:50:38.005168] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:44.056 23:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 85843 00:10:44.056 [2024-11-02 23:50:38.035547] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:10:44.332 00:10:44.332 real 0m13.525s 00:10:44.332 user 0m15.768s 00:10:44.332 sys 0m2.895s 00:10:44.332 23:50:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.332 ************************************ 00:10:44.332 END TEST raid_rebuild_test 00:10:44.332 ************************************ 00:10:44.332 23:50:38 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:10:44.332 23:50:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:10:44.332 23:50:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:44.332 23:50:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:44.332 ************************************ 00:10:44.332 START TEST raid_rebuild_test_sb 00:10:44.332 ************************************ 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86238 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86238 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 86238 ']' 00:10:44.332 23:50:38 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:44.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:44.332 23:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.332 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:44.332 Zero copy mechanism will not be used. 00:10:44.332 [2024-11-02 23:50:38.405148] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:10:44.332 [2024-11-02 23:50:38.405269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86238 ] 00:10:44.591 [2024-11-02 23:50:38.559994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.592 [2024-11-02 23:50:38.586558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.592 [2024-11-02 23:50:38.627845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.592 [2024-11-02 23:50:38.627881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.541 BaseBdev1_malloc 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.541 [2024-11-02 23:50:39.285184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:45.541 [2024-11-02 23:50:39.285254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.541 [2024-11-02 23:50:39.285289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:45.541 [2024-11-02 23:50:39.285302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.541 [2024-11-02 23:50:39.287357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.541 [2024-11-02 23:50:39.287389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:45.541 BaseBdev1 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:45.541 23:50:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.541 BaseBdev2_malloc 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.541 [2024-11-02 23:50:39.313494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:45.541 [2024-11-02 23:50:39.313535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.541 [2024-11-02 23:50:39.313554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:45.541 [2024-11-02 23:50:39.313562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.541 [2024-11-02 23:50:39.315657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.541 [2024-11-02 23:50:39.315696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:45.541 BaseBdev2 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.541 spare_malloc 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.541 spare_delay 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.541 [2024-11-02 23:50:39.353799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:45.541 [2024-11-02 23:50:39.353844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.541 [2024-11-02 23:50:39.353864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:45.541 [2024-11-02 23:50:39.353872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.541 [2024-11-02 23:50:39.355916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.541 [2024-11-02 23:50:39.355945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:45.541 spare 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.541 23:50:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.541 [2024-11-02 23:50:39.365829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.541 [2024-11-02 23:50:39.367616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.541 [2024-11-02 23:50:39.367826] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:45.541 [2024-11-02 23:50:39.367838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:45.541 [2024-11-02 23:50:39.368085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:45.541 [2024-11-02 23:50:39.368217] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:45.541 [2024-11-02 23:50:39.368233] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:45.541 [2024-11-02 23:50:39.368333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.541 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.542 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.542 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.542 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.542 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.542 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.542 "name": "raid_bdev1", 00:10:45.542 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:10:45.542 "strip_size_kb": 0, 00:10:45.542 "state": "online", 00:10:45.542 "raid_level": "raid1", 00:10:45.542 "superblock": true, 00:10:45.542 "num_base_bdevs": 2, 00:10:45.542 "num_base_bdevs_discovered": 2, 00:10:45.542 "num_base_bdevs_operational": 2, 00:10:45.542 "base_bdevs_list": [ 00:10:45.542 { 00:10:45.542 "name": "BaseBdev1", 00:10:45.542 "uuid": "fa29d763-d150-58f2-96f8-1c99a8850aac", 00:10:45.542 "is_configured": true, 00:10:45.542 "data_offset": 2048, 00:10:45.542 "data_size": 63488 00:10:45.542 }, 00:10:45.542 { 00:10:45.542 "name": "BaseBdev2", 00:10:45.542 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:10:45.542 "is_configured": true, 00:10:45.542 "data_offset": 2048, 00:10:45.542 "data_size": 63488 00:10:45.542 } 00:10:45.542 ] 00:10:45.542 }' 00:10:45.542 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.542 23:50:39 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.801 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:45.801 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.801 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.801 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:45.801 [2024-11-02 23:50:39.817289] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.801 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.801 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:10:45.801 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.801 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.801 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.801 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:45.801 23:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.060 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:10:46.060 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:46.060 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:46.060 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:46.060 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:46.060 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:10:46.060 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:46.060 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:46.060 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:46.060 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:46.060 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:10:46.060 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:46.060 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:46.060 23:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:46.060 [2024-11-02 23:50:40.100608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:46.060 /dev/nbd0 00:10:46.060 23:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:46.060 23:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:46.060 23:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:10:46.060 23:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:10:46.060 23:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:46.060 23:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:46.060 23:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:10:46.060 23:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:10:46.060 23:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:10:46.060 23:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:46.060 23:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:46.060 1+0 records in 00:10:46.060 1+0 records out 00:10:46.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301648 s, 13.6 MB/s 00:10:46.320 23:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:46.320 23:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:10:46.320 23:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:46.320 23:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:46.320 23:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:10:46.320 23:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:46.320 23:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:46.320 23:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:46.320 23:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:10:46.320 23:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:10:50.538 63488+0 records in 00:10:50.538 63488+0 records out 00:10:50.538 32505856 bytes (33 MB, 31 MiB) copied, 4.15045 s, 7.8 MB/s 00:10:50.538 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:50.538 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:50.538 23:50:44 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:50.538 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:50.538 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:10:50.538 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:50.538 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:50.538 [2024-11-02 23:50:44.511843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.538 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:50.538 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:50.538 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:50.538 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:50.538 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:50.538 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:50.538 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:50.538 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:50.538 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.539 [2024-11-02 23:50:44.543889] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.539 "name": "raid_bdev1", 00:10:50.539 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:10:50.539 "strip_size_kb": 0, 00:10:50.539 "state": "online", 00:10:50.539 "raid_level": "raid1", 00:10:50.539 "superblock": true, 
00:10:50.539 "num_base_bdevs": 2, 00:10:50.539 "num_base_bdevs_discovered": 1, 00:10:50.539 "num_base_bdevs_operational": 1, 00:10:50.539 "base_bdevs_list": [ 00:10:50.539 { 00:10:50.539 "name": null, 00:10:50.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.539 "is_configured": false, 00:10:50.539 "data_offset": 0, 00:10:50.539 "data_size": 63488 00:10:50.539 }, 00:10:50.539 { 00:10:50.539 "name": "BaseBdev2", 00:10:50.539 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:10:50.539 "is_configured": true, 00:10:50.539 "data_offset": 2048, 00:10:50.539 "data_size": 63488 00:10:50.539 } 00:10:50.539 ] 00:10:50.539 }' 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.539 23:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.108 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:51.108 23:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.108 23:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.108 [2024-11-02 23:50:44.935220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:51.108 [2024-11-02 23:50:44.953634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280 00:10:51.108 23:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.108 23:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:51.108 [2024-11-02 23:50:44.956131] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:52.046 23:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:52.046 23:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:10:52.046 23:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:52.046 23:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:52.046 23:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:52.046 23:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.046 23:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.046 23:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.046 23:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.046 23:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.047 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:52.047 "name": "raid_bdev1", 00:10:52.047 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:10:52.047 "strip_size_kb": 0, 00:10:52.047 "state": "online", 00:10:52.047 "raid_level": "raid1", 00:10:52.047 "superblock": true, 00:10:52.047 "num_base_bdevs": 2, 00:10:52.047 "num_base_bdevs_discovered": 2, 00:10:52.047 "num_base_bdevs_operational": 2, 00:10:52.047 "process": { 00:10:52.047 "type": "rebuild", 00:10:52.047 "target": "spare", 00:10:52.047 "progress": { 00:10:52.047 "blocks": 20480, 00:10:52.047 "percent": 32 00:10:52.047 } 00:10:52.047 }, 00:10:52.047 "base_bdevs_list": [ 00:10:52.047 { 00:10:52.047 "name": "spare", 00:10:52.047 "uuid": "d6196771-3fbf-5df1-9cce-c5411273e255", 00:10:52.047 "is_configured": true, 00:10:52.047 "data_offset": 2048, 00:10:52.047 "data_size": 63488 00:10:52.047 }, 00:10:52.047 { 00:10:52.047 "name": "BaseBdev2", 00:10:52.047 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:10:52.047 "is_configured": true, 00:10:52.047 "data_offset": 2048, 00:10:52.047 "data_size": 63488 
00:10:52.047 } 00:10:52.047 ] 00:10:52.047 }' 00:10:52.047 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:52.047 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:52.047 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:52.047 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:52.047 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:52.047 23:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.047 23:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.047 [2024-11-02 23:50:46.115322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:52.307 [2024-11-02 23:50:46.160763] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:52.307 [2024-11-02 23:50:46.160827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.307 [2024-11-02 23:50:46.160846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:52.307 [2024-11-02 23:50:46.160854] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:52.307 23:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.307 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:52.307 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.307 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.307 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:10:52.307 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.307 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:52.307 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.307 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.307 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.307 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.307 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.307 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.307 23:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.307 23:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.307 23:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.307 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.307 "name": "raid_bdev1", 00:10:52.307 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:10:52.307 "strip_size_kb": 0, 00:10:52.307 "state": "online", 00:10:52.307 "raid_level": "raid1", 00:10:52.307 "superblock": true, 00:10:52.307 "num_base_bdevs": 2, 00:10:52.307 "num_base_bdevs_discovered": 1, 00:10:52.307 "num_base_bdevs_operational": 1, 00:10:52.307 "base_bdevs_list": [ 00:10:52.307 { 00:10:52.307 "name": null, 00:10:52.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.307 "is_configured": false, 00:10:52.307 "data_offset": 0, 00:10:52.307 "data_size": 63488 00:10:52.307 }, 00:10:52.307 { 00:10:52.307 "name": "BaseBdev2", 00:10:52.307 "uuid": 
"e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:10:52.307 "is_configured": true, 00:10:52.307 "data_offset": 2048, 00:10:52.307 "data_size": 63488 00:10:52.307 } 00:10:52.307 ] 00:10:52.307 }' 00:10:52.307 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.307 23:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.566 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:52.566 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:52.566 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:52.566 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:52.566 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:52.566 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.566 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.566 23:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.566 23:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.566 23:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.566 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:52.566 "name": "raid_bdev1", 00:10:52.566 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:10:52.566 "strip_size_kb": 0, 00:10:52.566 "state": "online", 00:10:52.566 "raid_level": "raid1", 00:10:52.566 "superblock": true, 00:10:52.566 "num_base_bdevs": 2, 00:10:52.566 "num_base_bdevs_discovered": 1, 00:10:52.566 "num_base_bdevs_operational": 1, 00:10:52.566 "base_bdevs_list": [ 00:10:52.566 { 
00:10:52.566 "name": null, 00:10:52.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.566 "is_configured": false, 00:10:52.566 "data_offset": 0, 00:10:52.567 "data_size": 63488 00:10:52.567 }, 00:10:52.567 { 00:10:52.567 "name": "BaseBdev2", 00:10:52.567 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:10:52.567 "is_configured": true, 00:10:52.567 "data_offset": 2048, 00:10:52.567 "data_size": 63488 00:10:52.567 } 00:10:52.567 ] 00:10:52.567 }' 00:10:52.567 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:52.567 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:52.567 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:52.826 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:52.826 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:52.826 23:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.826 23:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.826 [2024-11-02 23:50:46.712858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:52.826 [2024-11-02 23:50:46.717525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e350 00:10:52.826 23:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.826 23:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:52.826 [2024-11-02 23:50:46.719428] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:53.764 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:53.764 23:50:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:53.764 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:53.764 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:53.764 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:53.764 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.764 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.764 23:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.764 23:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.764 23:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.764 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:53.764 "name": "raid_bdev1", 00:10:53.764 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:10:53.764 "strip_size_kb": 0, 00:10:53.764 "state": "online", 00:10:53.764 "raid_level": "raid1", 00:10:53.764 "superblock": true, 00:10:53.764 "num_base_bdevs": 2, 00:10:53.764 "num_base_bdevs_discovered": 2, 00:10:53.764 "num_base_bdevs_operational": 2, 00:10:53.764 "process": { 00:10:53.764 "type": "rebuild", 00:10:53.764 "target": "spare", 00:10:53.764 "progress": { 00:10:53.764 "blocks": 20480, 00:10:53.764 "percent": 32 00:10:53.764 } 00:10:53.764 }, 00:10:53.764 "base_bdevs_list": [ 00:10:53.764 { 00:10:53.764 "name": "spare", 00:10:53.764 "uuid": "d6196771-3fbf-5df1-9cce-c5411273e255", 00:10:53.764 "is_configured": true, 00:10:53.764 "data_offset": 2048, 00:10:53.764 "data_size": 63488 00:10:53.764 }, 00:10:53.764 { 00:10:53.764 "name": "BaseBdev2", 00:10:53.764 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:10:53.764 
"is_configured": true, 00:10:53.764 "data_offset": 2048, 00:10:53.765 "data_size": 63488 00:10:53.765 } 00:10:53.765 ] 00:10:53.765 }' 00:10:53.765 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:53.765 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:53.765 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:54.023 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:54.023 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:10:54.023 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:10:54.023 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:10:54.024 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:54.024 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:54.024 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:54.024 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=306 00:10:54.024 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:54.024 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:54.024 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:54.024 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:54.024 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:54.024 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:10:54.024 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.024 23:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.024 23:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.024 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.024 23:50:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.024 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:54.024 "name": "raid_bdev1", 00:10:54.024 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:10:54.024 "strip_size_kb": 0, 00:10:54.024 "state": "online", 00:10:54.024 "raid_level": "raid1", 00:10:54.024 "superblock": true, 00:10:54.024 "num_base_bdevs": 2, 00:10:54.024 "num_base_bdevs_discovered": 2, 00:10:54.024 "num_base_bdevs_operational": 2, 00:10:54.024 "process": { 00:10:54.024 "type": "rebuild", 00:10:54.024 "target": "spare", 00:10:54.024 "progress": { 00:10:54.024 "blocks": 22528, 00:10:54.024 "percent": 35 00:10:54.024 } 00:10:54.024 }, 00:10:54.024 "base_bdevs_list": [ 00:10:54.024 { 00:10:54.024 "name": "spare", 00:10:54.024 "uuid": "d6196771-3fbf-5df1-9cce-c5411273e255", 00:10:54.024 "is_configured": true, 00:10:54.024 "data_offset": 2048, 00:10:54.024 "data_size": 63488 00:10:54.024 }, 00:10:54.024 { 00:10:54.024 "name": "BaseBdev2", 00:10:54.024 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:10:54.024 "is_configured": true, 00:10:54.024 "data_offset": 2048, 00:10:54.024 "data_size": 63488 00:10:54.024 } 00:10:54.024 ] 00:10:54.024 }' 00:10:54.024 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:54.024 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:54.024 23:50:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:54.024 23:50:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:54.024 23:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:54.959 23:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:54.959 23:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:54.959 23:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:54.959 23:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:54.959 23:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:54.959 23:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:54.959 23:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.959 23:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.959 23:50:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.959 23:50:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.959 23:50:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.217 23:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:55.217 "name": "raid_bdev1", 00:10:55.217 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:10:55.217 "strip_size_kb": 0, 00:10:55.217 "state": "online", 00:10:55.217 "raid_level": "raid1", 00:10:55.217 "superblock": true, 00:10:55.217 "num_base_bdevs": 2, 00:10:55.217 "num_base_bdevs_discovered": 2, 00:10:55.217 "num_base_bdevs_operational": 2, 00:10:55.217 "process": { 
00:10:55.217 "type": "rebuild", 00:10:55.217 "target": "spare", 00:10:55.217 "progress": { 00:10:55.217 "blocks": 45056, 00:10:55.217 "percent": 70 00:10:55.217 } 00:10:55.217 }, 00:10:55.217 "base_bdevs_list": [ 00:10:55.217 { 00:10:55.217 "name": "spare", 00:10:55.217 "uuid": "d6196771-3fbf-5df1-9cce-c5411273e255", 00:10:55.217 "is_configured": true, 00:10:55.217 "data_offset": 2048, 00:10:55.217 "data_size": 63488 00:10:55.217 }, 00:10:55.217 { 00:10:55.217 "name": "BaseBdev2", 00:10:55.217 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:10:55.217 "is_configured": true, 00:10:55.217 "data_offset": 2048, 00:10:55.217 "data_size": 63488 00:10:55.217 } 00:10:55.217 ] 00:10:55.217 }' 00:10:55.217 23:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:55.217 23:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:55.217 23:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:55.217 23:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:55.217 23:50:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:55.784 [2024-11-02 23:50:49.830716] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:55.784 [2024-11-02 23:50:49.830831] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:55.784 [2024-11-02 23:50:49.830989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:56.351 
23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:56.351 "name": "raid_bdev1", 00:10:56.351 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:10:56.351 "strip_size_kb": 0, 00:10:56.351 "state": "online", 00:10:56.351 "raid_level": "raid1", 00:10:56.351 "superblock": true, 00:10:56.351 "num_base_bdevs": 2, 00:10:56.351 "num_base_bdevs_discovered": 2, 00:10:56.351 "num_base_bdevs_operational": 2, 00:10:56.351 "base_bdevs_list": [ 00:10:56.351 { 00:10:56.351 "name": "spare", 00:10:56.351 "uuid": "d6196771-3fbf-5df1-9cce-c5411273e255", 00:10:56.351 "is_configured": true, 00:10:56.351 "data_offset": 2048, 00:10:56.351 "data_size": 63488 00:10:56.351 }, 00:10:56.351 { 00:10:56.351 "name": "BaseBdev2", 00:10:56.351 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:10:56.351 "is_configured": true, 00:10:56.351 "data_offset": 2048, 00:10:56.351 "data_size": 63488 00:10:56.351 } 00:10:56.351 ] 00:10:56.351 }' 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:56.351 "name": "raid_bdev1", 00:10:56.351 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:10:56.351 "strip_size_kb": 0, 00:10:56.351 "state": "online", 00:10:56.351 "raid_level": "raid1", 00:10:56.351 "superblock": true, 00:10:56.351 "num_base_bdevs": 2, 00:10:56.351 "num_base_bdevs_discovered": 2, 00:10:56.351 "num_base_bdevs_operational": 2, 00:10:56.351 "base_bdevs_list": [ 00:10:56.351 { 00:10:56.351 
"name": "spare", 00:10:56.351 "uuid": "d6196771-3fbf-5df1-9cce-c5411273e255", 00:10:56.351 "is_configured": true, 00:10:56.351 "data_offset": 2048, 00:10:56.351 "data_size": 63488 00:10:56.351 }, 00:10:56.351 { 00:10:56.351 "name": "BaseBdev2", 00:10:56.351 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:10:56.351 "is_configured": true, 00:10:56.351 "data_offset": 2048, 00:10:56.351 "data_size": 63488 00:10:56.351 } 00:10:56.351 ] 00:10:56.351 }' 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.351 23:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.611 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.611 "name": "raid_bdev1", 00:10:56.611 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:10:56.611 "strip_size_kb": 0, 00:10:56.611 "state": "online", 00:10:56.611 "raid_level": "raid1", 00:10:56.611 "superblock": true, 00:10:56.611 "num_base_bdevs": 2, 00:10:56.611 "num_base_bdevs_discovered": 2, 00:10:56.611 "num_base_bdevs_operational": 2, 00:10:56.611 "base_bdevs_list": [ 00:10:56.611 { 00:10:56.611 "name": "spare", 00:10:56.611 "uuid": "d6196771-3fbf-5df1-9cce-c5411273e255", 00:10:56.611 "is_configured": true, 00:10:56.611 "data_offset": 2048, 00:10:56.611 "data_size": 63488 00:10:56.611 }, 00:10:56.611 { 00:10:56.611 "name": "BaseBdev2", 00:10:56.611 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:10:56.611 "is_configured": true, 00:10:56.611 "data_offset": 2048, 00:10:56.611 "data_size": 63488 00:10:56.611 } 00:10:56.611 ] 00:10:56.611 }' 00:10:56.611 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.611 23:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:56.879 [2024-11-02 23:50:50.874284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:56.879 [2024-11-02 23:50:50.874328] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.879 [2024-11-02 23:50:50.874417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.879 [2024-11-02 23:50:50.874500] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.879 [2024-11-02 23:50:50.874520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:56.879 23:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:10:57.138 /dev/nbd0 00:10:57.138 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:57.138 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:57.138 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:10:57.138 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:10:57.138 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:57.138 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:57.138 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:10:57.138 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:10:57.138 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:57.138 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:57.138 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:57.138 1+0 records in 00:10:57.138 1+0 records out 00:10:57.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359263 s, 11.4 MB/s 00:10:57.138 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.138 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:10:57.138 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.138 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:57.138 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:10:57.138 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:57.138 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:57.138 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:10:57.397 /dev/nbd1 00:10:57.397 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:57.397 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:57.397 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:10:57.397 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:10:57.397 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:57.397 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:57.397 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:10:57.397 23:50:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:10:57.397 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:57.397 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:57.397 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:57.397 1+0 records in 00:10:57.397 1+0 records out 00:10:57.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249232 s, 16.4 MB/s 00:10:57.398 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.398 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:10:57.398 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.398 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:57.398 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:10:57.398 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:57.398 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:57.398 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:10:57.657 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:10:57.657 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:57.657 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:57.657 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:57.657 
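The `waitfornbd` traces above poll `/proc/partitions` with `grep -q -w` inside a bounded retry loop before touching the device; `-w` matters because `nbd1` must not match `nbd10`. A self-contained sketch of that readiness check, using a temp file in place of `/proc/partitions` (the file contents, helper name, and retry count are illustrative):

```shell
# Sketch of waitfornbd's readiness check: poll a partitions table for a
# whole-word device name, giving up after a fixed number of attempts.
parts=$(mktemp)
printf '%s\n' "259 0 1048576 nbd0" "259 1 1048576 nbd10" > "$parts"

wait_for_dev() {
    local name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # -w anchors on word boundaries, so "nbd1" rejects "nbd10"
        grep -q -w "$name" "$parts" && return 0
    done
    return 1
}

wait_for_dev nbd0 && found0=yes || found0=no
wait_for_dev nbd1 && found1=yes || found1=no
rm -f "$parts"
echo "$found0 $found1"   # prints: yes no
```

The same whole-word grep appears again in `waitfornbd_exit`, inverted: there the loop breaks once the name *disappears* from `/proc/partitions`, confirming the nbd device was torn down before the test proceeds.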
23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:10:57.657 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:57.657 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:57.657 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:57.657 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:57.657 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:57.657 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:57.657 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:57.657 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:57.657 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:57.657 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:57.657 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:57.657 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.917 [2024-11-02 23:50:51.971915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:57.917 [2024-11-02 23:50:51.971971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.917 [2024-11-02 23:50:51.971991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:57.917 [2024-11-02 23:50:51.972003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.917 [2024-11-02 23:50:51.974193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.917 [2024-11-02 23:50:51.974228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:57.917 [2024-11-02 23:50:51.974306] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:10:57.917 [2024-11-02 
23:50:51.974360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:57.917 [2024-11-02 23:50:51.974492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.917 spare 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.917 23:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.181 [2024-11-02 23:50:52.074399] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:10:58.181 [2024-11-02 23:50:52.074432] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:58.181 [2024-11-02 23:50:52.074765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cae960 00:10:58.181 [2024-11-02 23:50:52.074949] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:10:58.181 [2024-11-02 23:50:52.074965] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:10:58.181 [2024-11-02 23:50:52.075110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.181 23:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.181 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:58.181 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.181 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.182 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:58.182 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.182 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:58.182 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.182 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.182 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.182 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.182 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.182 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.182 23:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.182 23:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.182 23:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.182 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.182 "name": "raid_bdev1", 00:10:58.182 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:10:58.182 "strip_size_kb": 0, 00:10:58.182 "state": "online", 00:10:58.182 "raid_level": "raid1", 00:10:58.182 "superblock": true, 00:10:58.182 "num_base_bdevs": 2, 00:10:58.182 "num_base_bdevs_discovered": 2, 00:10:58.182 "num_base_bdevs_operational": 2, 00:10:58.182 "base_bdevs_list": [ 00:10:58.182 { 00:10:58.182 "name": "spare", 00:10:58.182 "uuid": "d6196771-3fbf-5df1-9cce-c5411273e255", 00:10:58.182 "is_configured": true, 00:10:58.182 "data_offset": 2048, 00:10:58.182 "data_size": 63488 00:10:58.182 }, 00:10:58.182 { 00:10:58.182 "name": "BaseBdev2", 00:10:58.182 "uuid": 
"e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:10:58.182 "is_configured": true, 00:10:58.182 "data_offset": 2048, 00:10:58.182 "data_size": 63488 00:10:58.182 } 00:10:58.182 ] 00:10:58.182 }' 00:10:58.182 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.182 23:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.445 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:58.445 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:58.445 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:58.445 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:58.445 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:58.445 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.445 23:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.445 23:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.445 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:58.704 "name": "raid_bdev1", 00:10:58.704 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:10:58.704 "strip_size_kb": 0, 00:10:58.704 "state": "online", 00:10:58.704 "raid_level": "raid1", 00:10:58.704 "superblock": true, 00:10:58.704 "num_base_bdevs": 2, 00:10:58.704 "num_base_bdevs_discovered": 2, 00:10:58.704 "num_base_bdevs_operational": 2, 00:10:58.704 "base_bdevs_list": [ 00:10:58.704 { 
00:10:58.704 "name": "spare", 00:10:58.704 "uuid": "d6196771-3fbf-5df1-9cce-c5411273e255", 00:10:58.704 "is_configured": true, 00:10:58.704 "data_offset": 2048, 00:10:58.704 "data_size": 63488 00:10:58.704 }, 00:10:58.704 { 00:10:58.704 "name": "BaseBdev2", 00:10:58.704 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:10:58.704 "is_configured": true, 00:10:58.704 "data_offset": 2048, 00:10:58.704 "data_size": 63488 00:10:58.704 } 00:10:58.704 ] 00:10:58.704 }' 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.704 [2024-11-02 23:50:52.714714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.704 "name": "raid_bdev1", 00:10:58.704 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:10:58.704 "strip_size_kb": 0, 00:10:58.704 
"state": "online", 00:10:58.704 "raid_level": "raid1", 00:10:58.704 "superblock": true, 00:10:58.704 "num_base_bdevs": 2, 00:10:58.704 "num_base_bdevs_discovered": 1, 00:10:58.704 "num_base_bdevs_operational": 1, 00:10:58.704 "base_bdevs_list": [ 00:10:58.704 { 00:10:58.704 "name": null, 00:10:58.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.704 "is_configured": false, 00:10:58.704 "data_offset": 0, 00:10:58.704 "data_size": 63488 00:10:58.704 }, 00:10:58.704 { 00:10:58.704 "name": "BaseBdev2", 00:10:58.704 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:10:58.704 "is_configured": true, 00:10:58.704 "data_offset": 2048, 00:10:58.704 "data_size": 63488 00:10:58.704 } 00:10:58.704 ] 00:10:58.704 }' 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.704 23:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.284 23:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:59.284 23:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.284 23:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.284 [2024-11-02 23:50:53.130045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:59.284 [2024-11-02 23:50:53.130249] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:10:59.284 [2024-11-02 23:50:53.130271] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:10:59.284 [2024-11-02 23:50:53.130307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:59.284 [2024-11-02 23:50:53.135013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caea30 00:10:59.285 23:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.285 23:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:10:59.285 [2024-11-02 23:50:53.136921] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:00.224 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:00.224 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:00.224 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:00.224 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:00.224 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:00.224 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.224 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.224 23:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.224 23:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.224 23:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.224 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:00.224 "name": "raid_bdev1", 00:11:00.224 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:11:00.224 "strip_size_kb": 0, 00:11:00.224 "state": "online", 00:11:00.224 "raid_level": "raid1", 
00:11:00.224 "superblock": true, 00:11:00.224 "num_base_bdevs": 2, 00:11:00.224 "num_base_bdevs_discovered": 2, 00:11:00.224 "num_base_bdevs_operational": 2, 00:11:00.224 "process": { 00:11:00.224 "type": "rebuild", 00:11:00.224 "target": "spare", 00:11:00.224 "progress": { 00:11:00.224 "blocks": 20480, 00:11:00.224 "percent": 32 00:11:00.224 } 00:11:00.224 }, 00:11:00.224 "base_bdevs_list": [ 00:11:00.224 { 00:11:00.224 "name": "spare", 00:11:00.224 "uuid": "d6196771-3fbf-5df1-9cce-c5411273e255", 00:11:00.224 "is_configured": true, 00:11:00.224 "data_offset": 2048, 00:11:00.224 "data_size": 63488 00:11:00.224 }, 00:11:00.224 { 00:11:00.224 "name": "BaseBdev2", 00:11:00.224 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:11:00.224 "is_configured": true, 00:11:00.224 "data_offset": 2048, 00:11:00.224 "data_size": 63488 00:11:00.224 } 00:11:00.224 ] 00:11:00.224 }' 00:11:00.224 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:00.224 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:00.224 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:00.224 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:00.225 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:00.225 23:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.225 23:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.225 [2024-11-02 23:50:54.277283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:00.484 [2024-11-02 23:50:54.341628] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:00.485 [2024-11-02 23:50:54.341705] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:11:00.485 [2024-11-02 23:50:54.341722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:00.485 [2024-11-02 23:50:54.341729] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:00.485 23:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.485 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:00.485 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.485 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.485 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.485 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.485 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:00.485 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.485 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.485 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.485 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.485 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.485 23:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.485 23:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.485 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.485 23:50:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.485 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.485 "name": "raid_bdev1", 00:11:00.485 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:11:00.485 "strip_size_kb": 0, 00:11:00.485 "state": "online", 00:11:00.485 "raid_level": "raid1", 00:11:00.485 "superblock": true, 00:11:00.485 "num_base_bdevs": 2, 00:11:00.485 "num_base_bdevs_discovered": 1, 00:11:00.485 "num_base_bdevs_operational": 1, 00:11:00.485 "base_bdevs_list": [ 00:11:00.485 { 00:11:00.485 "name": null, 00:11:00.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.485 "is_configured": false, 00:11:00.485 "data_offset": 0, 00:11:00.485 "data_size": 63488 00:11:00.485 }, 00:11:00.485 { 00:11:00.485 "name": "BaseBdev2", 00:11:00.485 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:11:00.485 "is_configured": true, 00:11:00.485 "data_offset": 2048, 00:11:00.485 "data_size": 63488 00:11:00.485 } 00:11:00.485 ] 00:11:00.485 }' 00:11:00.485 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.485 23:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.745 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:00.745 23:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.745 23:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.745 [2024-11-02 23:50:54.825849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:00.745 [2024-11-02 23:50:54.825920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.745 [2024-11-02 23:50:54.825946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:00.745 [2024-11-02 23:50:54.825955] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.745 [2024-11-02 23:50:54.826382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.745 [2024-11-02 23:50:54.826400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:00.745 [2024-11-02 23:50:54.826487] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:00.745 [2024-11-02 23:50:54.826499] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:00.745 [2024-11-02 23:50:54.826515] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:00.745 [2024-11-02 23:50:54.826534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:00.745 [2024-11-02 23:50:54.831194] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:11:00.745 spare 00:11:00.745 23:50:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.745 23:50:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:00.745 [2024-11-02 23:50:54.833080] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:02.125 23:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:02.125 23:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:02.125 23:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:02.125 23:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:02.125 23:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:02.126 23:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:02.126 23:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.126 23:50:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.126 23:50:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.126 23:50:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.126 23:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:02.126 "name": "raid_bdev1", 00:11:02.126 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:11:02.126 "strip_size_kb": 0, 00:11:02.126 "state": "online", 00:11:02.126 "raid_level": "raid1", 00:11:02.126 "superblock": true, 00:11:02.126 "num_base_bdevs": 2, 00:11:02.126 "num_base_bdevs_discovered": 2, 00:11:02.126 "num_base_bdevs_operational": 2, 00:11:02.126 "process": { 00:11:02.126 "type": "rebuild", 00:11:02.126 "target": "spare", 00:11:02.126 "progress": { 00:11:02.126 "blocks": 20480, 00:11:02.126 "percent": 32 00:11:02.126 } 00:11:02.126 }, 00:11:02.126 "base_bdevs_list": [ 00:11:02.126 { 00:11:02.126 "name": "spare", 00:11:02.126 "uuid": "d6196771-3fbf-5df1-9cce-c5411273e255", 00:11:02.126 "is_configured": true, 00:11:02.126 "data_offset": 2048, 00:11:02.126 "data_size": 63488 00:11:02.126 }, 00:11:02.126 { 00:11:02.126 "name": "BaseBdev2", 00:11:02.126 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:11:02.126 "is_configured": true, 00:11:02.126 "data_offset": 2048, 00:11:02.126 "data_size": 63488 00:11:02.126 } 00:11:02.126 ] 00:11:02.126 }' 00:11:02.126 23:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:02.126 23:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:02.126 23:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:02.126 
23:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:02.126 23:50:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:02.126 23:50:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.126 23:50:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.126 [2024-11-02 23:50:55.978165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:02.126 [2024-11-02 23:50:56.037986] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:02.126 [2024-11-02 23:50:56.038052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.126 [2024-11-02 23:50:56.038067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:02.126 [2024-11-02 23:50:56.038077] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:02.126 23:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.126 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:02.126 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.126 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.126 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.126 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.126 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:02.126 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.126 23:50:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.126 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.126 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.126 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.126 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.126 23:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.126 23:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.126 23:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.126 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.126 "name": "raid_bdev1", 00:11:02.126 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:11:02.126 "strip_size_kb": 0, 00:11:02.126 "state": "online", 00:11:02.126 "raid_level": "raid1", 00:11:02.126 "superblock": true, 00:11:02.126 "num_base_bdevs": 2, 00:11:02.126 "num_base_bdevs_discovered": 1, 00:11:02.126 "num_base_bdevs_operational": 1, 00:11:02.126 "base_bdevs_list": [ 00:11:02.126 { 00:11:02.126 "name": null, 00:11:02.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.126 "is_configured": false, 00:11:02.126 "data_offset": 0, 00:11:02.126 "data_size": 63488 00:11:02.126 }, 00:11:02.126 { 00:11:02.126 "name": "BaseBdev2", 00:11:02.126 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:11:02.126 "is_configured": true, 00:11:02.126 "data_offset": 2048, 00:11:02.126 "data_size": 63488 00:11:02.126 } 00:11:02.126 ] 00:11:02.126 }' 00:11:02.126 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.126 23:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.694 23:50:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:02.694 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:02.694 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:02.694 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:02.694 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:02.694 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.694 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.694 23:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.694 23:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.694 23:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.694 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:02.694 "name": "raid_bdev1", 00:11:02.694 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:11:02.694 "strip_size_kb": 0, 00:11:02.694 "state": "online", 00:11:02.694 "raid_level": "raid1", 00:11:02.695 "superblock": true, 00:11:02.695 "num_base_bdevs": 2, 00:11:02.695 "num_base_bdevs_discovered": 1, 00:11:02.695 "num_base_bdevs_operational": 1, 00:11:02.695 "base_bdevs_list": [ 00:11:02.695 { 00:11:02.695 "name": null, 00:11:02.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.695 "is_configured": false, 00:11:02.695 "data_offset": 0, 00:11:02.695 "data_size": 63488 00:11:02.695 }, 00:11:02.695 { 00:11:02.695 "name": "BaseBdev2", 00:11:02.695 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:11:02.695 "is_configured": true, 00:11:02.695 "data_offset": 2048, 00:11:02.695 "data_size": 
63488 00:11:02.695 } 00:11:02.695 ] 00:11:02.695 }' 00:11:02.695 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:02.695 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:02.695 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:02.695 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:02.695 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:02.695 23:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.695 23:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.695 23:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.695 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:02.695 23:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.695 23:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.695 [2024-11-02 23:50:56.673922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:02.695 [2024-11-02 23:50:56.673994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.695 [2024-11-02 23:50:56.674014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:02.695 [2024-11-02 23:50:56.674024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.695 [2024-11-02 23:50:56.674409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.695 [2024-11-02 23:50:56.674429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:11:02.695 [2024-11-02 23:50:56.674501] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:02.695 [2024-11-02 23:50:56.674519] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:02.695 [2024-11-02 23:50:56.674526] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:02.695 [2024-11-02 23:50:56.674538] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:02.695 BaseBdev1 00:11:02.695 23:50:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.695 23:50:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:03.633 23:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:03.633 23:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.633 23:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.633 23:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.633 23:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.633 23:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:03.633 23:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.633 23:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.633 23:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.633 23:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.633 23:50:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.633 23:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.633 23:50:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.633 23:50:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.633 23:50:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.893 23:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.893 "name": "raid_bdev1", 00:11:03.893 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:11:03.893 "strip_size_kb": 0, 00:11:03.893 "state": "online", 00:11:03.893 "raid_level": "raid1", 00:11:03.893 "superblock": true, 00:11:03.893 "num_base_bdevs": 2, 00:11:03.893 "num_base_bdevs_discovered": 1, 00:11:03.893 "num_base_bdevs_operational": 1, 00:11:03.893 "base_bdevs_list": [ 00:11:03.893 { 00:11:03.893 "name": null, 00:11:03.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.893 "is_configured": false, 00:11:03.893 "data_offset": 0, 00:11:03.893 "data_size": 63488 00:11:03.893 }, 00:11:03.893 { 00:11:03.893 "name": "BaseBdev2", 00:11:03.893 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:11:03.893 "is_configured": true, 00:11:03.893 "data_offset": 2048, 00:11:03.893 "data_size": 63488 00:11:03.893 } 00:11:03.893 ] 00:11:03.893 }' 00:11:03.893 23:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.893 23:50:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.152 23:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:04.153 23:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:04.153 23:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:11:04.153 23:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:04.153 23:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:04.153 23:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.153 23:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.153 23:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.153 23:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.153 23:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.153 23:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:04.153 "name": "raid_bdev1", 00:11:04.153 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:11:04.153 "strip_size_kb": 0, 00:11:04.153 "state": "online", 00:11:04.153 "raid_level": "raid1", 00:11:04.153 "superblock": true, 00:11:04.153 "num_base_bdevs": 2, 00:11:04.153 "num_base_bdevs_discovered": 1, 00:11:04.153 "num_base_bdevs_operational": 1, 00:11:04.153 "base_bdevs_list": [ 00:11:04.153 { 00:11:04.153 "name": null, 00:11:04.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.153 "is_configured": false, 00:11:04.153 "data_offset": 0, 00:11:04.153 "data_size": 63488 00:11:04.153 }, 00:11:04.153 { 00:11:04.153 "name": "BaseBdev2", 00:11:04.153 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:11:04.153 "is_configured": true, 00:11:04.153 "data_offset": 2048, 00:11:04.153 "data_size": 63488 00:11:04.153 } 00:11:04.153 ] 00:11:04.153 }' 00:11:04.153 23:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:04.412 23:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:04.412 23:50:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:04.412 23:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:04.412 23:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:04.412 23:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:11:04.412 23:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:04.412 23:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:04.412 23:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:04.412 23:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:04.412 23:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:04.412 23:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:04.412 23:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.412 23:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.413 [2024-11-02 23:50:58.315207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.413 [2024-11-02 23:50:58.315376] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:04.413 [2024-11-02 23:50:58.315403] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:04.413 request: 00:11:04.413 { 00:11:04.413 "base_bdev": "BaseBdev1", 00:11:04.413 "raid_bdev": "raid_bdev1", 00:11:04.413 "method": 
"bdev_raid_add_base_bdev", 00:11:04.413 "req_id": 1 00:11:04.413 } 00:11:04.413 Got JSON-RPC error response 00:11:04.413 response: 00:11:04.413 { 00:11:04.413 "code": -22, 00:11:04.413 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:04.413 } 00:11:04.413 23:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:04.413 23:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:11:04.413 23:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:04.413 23:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:04.413 23:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:04.413 23:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:05.352 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:05.352 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.352 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.352 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.352 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.352 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:05.352 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.352 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.352 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.352 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.352 23:50:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.352 23:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.352 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.352 23:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.352 23:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.352 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.352 "name": "raid_bdev1", 00:11:05.352 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:11:05.352 "strip_size_kb": 0, 00:11:05.352 "state": "online", 00:11:05.352 "raid_level": "raid1", 00:11:05.352 "superblock": true, 00:11:05.352 "num_base_bdevs": 2, 00:11:05.352 "num_base_bdevs_discovered": 1, 00:11:05.352 "num_base_bdevs_operational": 1, 00:11:05.352 "base_bdevs_list": [ 00:11:05.352 { 00:11:05.352 "name": null, 00:11:05.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.352 "is_configured": false, 00:11:05.352 "data_offset": 0, 00:11:05.352 "data_size": 63488 00:11:05.352 }, 00:11:05.352 { 00:11:05.352 "name": "BaseBdev2", 00:11:05.352 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:11:05.352 "is_configured": true, 00:11:05.352 "data_offset": 2048, 00:11:05.352 "data_size": 63488 00:11:05.352 } 00:11:05.352 ] 00:11:05.352 }' 00:11:05.352 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.352 23:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.919 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:05.919 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:05.919 23:50:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:05.920 "name": "raid_bdev1", 00:11:05.920 "uuid": "82bcbd84-2c2f-42fe-848e-c9cd0d615469", 00:11:05.920 "strip_size_kb": 0, 00:11:05.920 "state": "online", 00:11:05.920 "raid_level": "raid1", 00:11:05.920 "superblock": true, 00:11:05.920 "num_base_bdevs": 2, 00:11:05.920 "num_base_bdevs_discovered": 1, 00:11:05.920 "num_base_bdevs_operational": 1, 00:11:05.920 "base_bdevs_list": [ 00:11:05.920 { 00:11:05.920 "name": null, 00:11:05.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.920 "is_configured": false, 00:11:05.920 "data_offset": 0, 00:11:05.920 "data_size": 63488 00:11:05.920 }, 00:11:05.920 { 00:11:05.920 "name": "BaseBdev2", 00:11:05.920 "uuid": "e1ab41ba-1510-5b09-b430-2ecd58f03cd5", 00:11:05.920 "is_configured": true, 00:11:05.920 "data_offset": 2048, 00:11:05.920 "data_size": 63488 00:11:05.920 } 00:11:05.920 ] 00:11:05.920 }' 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86238 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 86238 ']' 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 86238 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86238 00:11:05.920 killing process with pid 86238 00:11:05.920 Received shutdown signal, test time was about 60.000000 seconds 00:11:05.920 00:11:05.920 Latency(us) 00:11:05.920 [2024-11-02T23:51:00.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:05.920 [2024-11-02T23:51:00.015Z] =================================================================================================================== 00:11:05.920 [2024-11-02T23:51:00.015Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86238' 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 86238 00:11:05.920 [2024-11-02 23:50:59.968514] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:05.920 [2024-11-02 
23:50:59.968652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.920 23:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 86238 00:11:05.920 [2024-11-02 23:50:59.968706] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.920 [2024-11-02 23:50:59.968715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:11:05.920 [2024-11-02 23:50:59.999237] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:06.179 23:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:11:06.179 00:11:06.179 real 0m21.886s 00:11:06.179 user 0m26.709s 00:11:06.179 sys 0m3.816s 00:11:06.179 23:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:06.179 ************************************ 00:11:06.179 END TEST raid_rebuild_test_sb 00:11:06.179 ************************************ 00:11:06.179 23:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.179 23:51:00 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:11:06.179 23:51:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:11:06.179 23:51:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:06.179 23:51:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:06.438 ************************************ 00:11:06.438 START TEST raid_rebuild_test_io 00:11:06.438 ************************************ 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:06.438 
23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=86956 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 86956 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 86956 ']' 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:06.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:06.438 23:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:06.438 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:06.438 Zero copy mechanism will not be used. 00:11:06.438 [2024-11-02 23:51:00.378141] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:11:06.438 [2024-11-02 23:51:00.378279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86956 ] 00:11:06.697 [2024-11-02 23:51:00.532668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.697 [2024-11-02 23:51:00.562050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.697 [2024-11-02 23:51:00.604697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.697 [2024-11-02 23:51:00.604765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.267 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:07.267 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:11:07.267 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:07.267 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:07.267 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.267 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:07.267 BaseBdev1_malloc 00:11:07.267 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.267 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:07.267 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.267 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:07.267 [2024-11-02 23:51:01.230767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:11:07.267 [2024-11-02 23:51:01.230819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.267 [2024-11-02 23:51:01.230841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:07.267 [2024-11-02 23:51:01.230855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.267 [2024-11-02 23:51:01.233030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.267 [2024-11-02 23:51:01.233067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:07.267 BaseBdev1 00:11:07.267 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.267 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:07.267 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:07.267 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.267 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:07.267 BaseBdev2_malloc 00:11:07.267 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.267 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:07.267 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:07.268 [2024-11-02 23:51:01.263097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:07.268 [2024-11-02 23:51:01.263142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.268 [2024-11-02 23:51:01.263162] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:07.268 [2024-11-02 23:51:01.263170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.268 [2024-11-02 23:51:01.265252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.268 [2024-11-02 23:51:01.265292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:07.268 BaseBdev2 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:07.268 spare_malloc 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:07.268 spare_delay 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:07.268 [2024-11-02 23:51:01.303373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:11:07.268 [2024-11-02 23:51:01.303421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.268 [2024-11-02 23:51:01.303457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:07.268 [2024-11-02 23:51:01.303465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.268 [2024-11-02 23:51:01.305498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.268 [2024-11-02 23:51:01.305532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:07.268 spare 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:07.268 [2024-11-02 23:51:01.315393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:07.268 [2024-11-02 23:51:01.317210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:07.268 [2024-11-02 23:51:01.317300] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:07.268 [2024-11-02 23:51:01.317312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:07.268 [2024-11-02 23:51:01.317560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:07.268 [2024-11-02 23:51:01.317683] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:07.268 [2024-11-02 23:51:01.317695] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000001200 00:11:07.268 [2024-11-02 23:51:01.317816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.268 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.527 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.527 
"name": "raid_bdev1", 00:11:07.527 "uuid": "1664e8a7-f6fd-4162-b75c-a756ef184853", 00:11:07.527 "strip_size_kb": 0, 00:11:07.527 "state": "online", 00:11:07.527 "raid_level": "raid1", 00:11:07.527 "superblock": false, 00:11:07.527 "num_base_bdevs": 2, 00:11:07.527 "num_base_bdevs_discovered": 2, 00:11:07.527 "num_base_bdevs_operational": 2, 00:11:07.527 "base_bdevs_list": [ 00:11:07.527 { 00:11:07.527 "name": "BaseBdev1", 00:11:07.527 "uuid": "b15856fe-5f26-5627-8243-7f041e316544", 00:11:07.527 "is_configured": true, 00:11:07.527 "data_offset": 0, 00:11:07.527 "data_size": 65536 00:11:07.527 }, 00:11:07.527 { 00:11:07.527 "name": "BaseBdev2", 00:11:07.527 "uuid": "0de74136-b94d-519f-8b2a-be88f5fad152", 00:11:07.527 "is_configured": true, 00:11:07.527 "data_offset": 0, 00:11:07.527 "data_size": 65536 00:11:07.527 } 00:11:07.527 ] 00:11:07.527 }' 00:11:07.527 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.527 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:07.786 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:07.786 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:07.786 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.786 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:07.786 [2024-11-02 23:51:01.814843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.786 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.786 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:07.786 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.786 23:51:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:07.786 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.786 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:07.786 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:08.045 [2024-11-02 23:51:01.902425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:08.045 23:51:01 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.045 "name": "raid_bdev1", 00:11:08.045 "uuid": "1664e8a7-f6fd-4162-b75c-a756ef184853", 00:11:08.045 "strip_size_kb": 0, 00:11:08.045 "state": "online", 00:11:08.045 "raid_level": "raid1", 00:11:08.045 "superblock": false, 00:11:08.045 "num_base_bdevs": 2, 00:11:08.045 "num_base_bdevs_discovered": 1, 00:11:08.045 "num_base_bdevs_operational": 1, 00:11:08.045 "base_bdevs_list": [ 00:11:08.045 { 00:11:08.045 "name": null, 00:11:08.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.045 "is_configured": false, 00:11:08.045 "data_offset": 0, 00:11:08.045 "data_size": 65536 00:11:08.045 }, 00:11:08.045 { 00:11:08.045 "name": "BaseBdev2", 00:11:08.045 "uuid": "0de74136-b94d-519f-8b2a-be88f5fad152", 00:11:08.045 "is_configured": true, 00:11:08.045 "data_offset": 0, 00:11:08.045 "data_size": 65536 00:11:08.045 } 00:11:08.045 ] 00:11:08.045 }' 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:11:08.045 23:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:08.045 [2024-11-02 23:51:02.012204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:08.045 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:08.045 Zero copy mechanism will not be used. 00:11:08.045 Running I/O for 60 seconds... 00:11:08.307 23:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:08.307 23:51:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.307 23:51:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:08.307 [2024-11-02 23:51:02.357307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:08.566 23:51:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.566 23:51:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:08.566 [2024-11-02 23:51:02.416658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:08.566 [2024-11-02 23:51:02.418986] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:08.566 [2024-11-02 23:51:02.520380] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:08.566 [2024-11-02 23:51:02.520961] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:08.826 [2024-11-02 23:51:02.727816] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:08.826 [2024-11-02 23:51:02.728098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:09.086 153.00 IOPS, 459.00 MiB/s 
[2024-11-02T23:51:03.181Z] [2024-11-02 23:51:03.066698] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:09.345 [2024-11-02 23:51:03.186365] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:09.345 [2024-11-02 23:51:03.186621] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:09.345 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:09.345 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:09.345 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:09.345 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:09.345 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:09.345 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.345 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.345 23:51:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.345 23:51:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.345 23:51:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.605 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:09.605 "name": "raid_bdev1", 00:11:09.605 "uuid": "1664e8a7-f6fd-4162-b75c-a756ef184853", 00:11:09.605 "strip_size_kb": 0, 00:11:09.605 "state": "online", 00:11:09.605 "raid_level": "raid1", 00:11:09.605 "superblock": false, 00:11:09.605 "num_base_bdevs": 2, 00:11:09.605 
"num_base_bdevs_discovered": 2, 00:11:09.605 "num_base_bdevs_operational": 2, 00:11:09.605 "process": { 00:11:09.605 "type": "rebuild", 00:11:09.605 "target": "spare", 00:11:09.605 "progress": { 00:11:09.605 "blocks": 12288, 00:11:09.605 "percent": 18 00:11:09.605 } 00:11:09.605 }, 00:11:09.605 "base_bdevs_list": [ 00:11:09.605 { 00:11:09.605 "name": "spare", 00:11:09.605 "uuid": "35390163-69a7-5660-9710-c32a934b817d", 00:11:09.605 "is_configured": true, 00:11:09.605 "data_offset": 0, 00:11:09.605 "data_size": 65536 00:11:09.605 }, 00:11:09.605 { 00:11:09.605 "name": "BaseBdev2", 00:11:09.605 "uuid": "0de74136-b94d-519f-8b2a-be88f5fad152", 00:11:09.605 "is_configured": true, 00:11:09.605 "data_offset": 0, 00:11:09.605 "data_size": 65536 00:11:09.605 } 00:11:09.605 ] 00:11:09.605 }' 00:11:09.605 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:09.605 [2024-11-02 23:51:03.506251] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:09.605 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:09.605 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:09.605 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:09.605 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:09.605 23:51:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.605 23:51:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.605 [2024-11-02 23:51:03.550276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:09.605 [2024-11-02 23:51:03.639173] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such 
device 00:11:09.605 [2024-11-02 23:51:03.653173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.605 [2024-11-02 23:51:03.653276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:09.605 [2024-11-02 23:51:03.653305] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:09.605 [2024-11-02 23:51:03.676051] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:11:09.605 23:51:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.605 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:09.605 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.605 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.605 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.605 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.605 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:09.605 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.605 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.605 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.605 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.864 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.864 23:51:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.864 23:51:03 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.864 23:51:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.865 23:51:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.865 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.865 "name": "raid_bdev1", 00:11:09.865 "uuid": "1664e8a7-f6fd-4162-b75c-a756ef184853", 00:11:09.865 "strip_size_kb": 0, 00:11:09.865 "state": "online", 00:11:09.865 "raid_level": "raid1", 00:11:09.865 "superblock": false, 00:11:09.865 "num_base_bdevs": 2, 00:11:09.865 "num_base_bdevs_discovered": 1, 00:11:09.865 "num_base_bdevs_operational": 1, 00:11:09.865 "base_bdevs_list": [ 00:11:09.865 { 00:11:09.865 "name": null, 00:11:09.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.865 "is_configured": false, 00:11:09.865 "data_offset": 0, 00:11:09.865 "data_size": 65536 00:11:09.865 }, 00:11:09.865 { 00:11:09.865 "name": "BaseBdev2", 00:11:09.865 "uuid": "0de74136-b94d-519f-8b2a-be88f5fad152", 00:11:09.865 "is_configured": true, 00:11:09.865 "data_offset": 0, 00:11:09.865 "data_size": 65536 00:11:09.865 } 00:11:09.865 ] 00:11:09.865 }' 00:11:09.865 23:51:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.865 23:51:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:10.123 152.00 IOPS, 456.00 MiB/s [2024-11-02T23:51:04.218Z] 23:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:10.123 23:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:10.123 23:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:10.123 23:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:10.123 23:51:04 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:10.123 23:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.123 23:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.123 23:51:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.123 23:51:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:10.123 23:51:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.123 23:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:10.123 "name": "raid_bdev1", 00:11:10.123 "uuid": "1664e8a7-f6fd-4162-b75c-a756ef184853", 00:11:10.123 "strip_size_kb": 0, 00:11:10.123 "state": "online", 00:11:10.123 "raid_level": "raid1", 00:11:10.123 "superblock": false, 00:11:10.123 "num_base_bdevs": 2, 00:11:10.123 "num_base_bdevs_discovered": 1, 00:11:10.123 "num_base_bdevs_operational": 1, 00:11:10.123 "base_bdevs_list": [ 00:11:10.123 { 00:11:10.123 "name": null, 00:11:10.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.123 "is_configured": false, 00:11:10.123 "data_offset": 0, 00:11:10.123 "data_size": 65536 00:11:10.123 }, 00:11:10.123 { 00:11:10.123 "name": "BaseBdev2", 00:11:10.123 "uuid": "0de74136-b94d-519f-8b2a-be88f5fad152", 00:11:10.123 "is_configured": true, 00:11:10.123 "data_offset": 0, 00:11:10.123 "data_size": 65536 00:11:10.123 } 00:11:10.123 ] 00:11:10.123 }' 00:11:10.123 23:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:10.383 23:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:10.383 23:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:10.383 23:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:11:10.383 23:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:10.383 23:51:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.383 23:51:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:10.383 [2024-11-02 23:51:04.284514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:10.383 23:51:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.383 23:51:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:10.383 [2024-11-02 23:51:04.338714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:11:10.383 [2024-11-02 23:51:04.340624] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:10.383 [2024-11-02 23:51:04.447239] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:10.383 [2024-11-02 23:51:04.447801] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:10.655 [2024-11-02 23:51:04.662317] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:10.655 [2024-11-02 23:51:04.662570] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:10.918 [2024-11-02 23:51:05.010836] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:11.178 163.00 IOPS, 489.00 MiB/s [2024-11-02T23:51:05.273Z] [2024-11-02 23:51:05.139010] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:11.178 [2024-11-02 23:51:05.139242] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:11.439 "name": "raid_bdev1", 00:11:11.439 "uuid": "1664e8a7-f6fd-4162-b75c-a756ef184853", 00:11:11.439 "strip_size_kb": 0, 00:11:11.439 "state": "online", 00:11:11.439 "raid_level": "raid1", 00:11:11.439 "superblock": false, 00:11:11.439 "num_base_bdevs": 2, 00:11:11.439 "num_base_bdevs_discovered": 2, 00:11:11.439 "num_base_bdevs_operational": 2, 00:11:11.439 "process": { 00:11:11.439 "type": "rebuild", 00:11:11.439 "target": "spare", 00:11:11.439 "progress": { 00:11:11.439 "blocks": 10240, 00:11:11.439 "percent": 15 00:11:11.439 } 00:11:11.439 }, 00:11:11.439 "base_bdevs_list": [ 00:11:11.439 { 00:11:11.439 "name": "spare", 00:11:11.439 "uuid": 
"35390163-69a7-5660-9710-c32a934b817d", 00:11:11.439 "is_configured": true, 00:11:11.439 "data_offset": 0, 00:11:11.439 "data_size": 65536 00:11:11.439 }, 00:11:11.439 { 00:11:11.439 "name": "BaseBdev2", 00:11:11.439 "uuid": "0de74136-b94d-519f-8b2a-be88f5fad152", 00:11:11.439 "is_configured": true, 00:11:11.439 "data_offset": 0, 00:11:11.439 "data_size": 65536 00:11:11.439 } 00:11:11.439 ] 00:11:11.439 }' 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=324 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:11.439 23:51:05 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.439 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:11.439 "name": "raid_bdev1", 00:11:11.439 "uuid": "1664e8a7-f6fd-4162-b75c-a756ef184853", 00:11:11.439 "strip_size_kb": 0, 00:11:11.439 "state": "online", 00:11:11.439 "raid_level": "raid1", 00:11:11.439 "superblock": false, 00:11:11.439 "num_base_bdevs": 2, 00:11:11.439 "num_base_bdevs_discovered": 2, 00:11:11.439 "num_base_bdevs_operational": 2, 00:11:11.439 "process": { 00:11:11.439 "type": "rebuild", 00:11:11.439 "target": "spare", 00:11:11.439 "progress": { 00:11:11.439 "blocks": 14336, 00:11:11.439 "percent": 21 00:11:11.439 } 00:11:11.439 }, 00:11:11.439 "base_bdevs_list": [ 00:11:11.439 { 00:11:11.439 "name": "spare", 00:11:11.439 "uuid": "35390163-69a7-5660-9710-c32a934b817d", 00:11:11.439 "is_configured": true, 00:11:11.439 "data_offset": 0, 00:11:11.439 "data_size": 65536 00:11:11.439 }, 00:11:11.439 { 00:11:11.439 "name": "BaseBdev2", 00:11:11.439 "uuid": "0de74136-b94d-519f-8b2a-be88f5fad152", 00:11:11.439 "is_configured": true, 00:11:11.440 "data_offset": 0, 00:11:11.440 "data_size": 65536 00:11:11.440 } 00:11:11.440 ] 00:11:11.440 }' 00:11:11.440 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:11.699 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:11.699 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:11.699 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:11.699 23:51:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:11.959 [2024-11-02 23:51:05.793034] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:12.527 141.25 IOPS, 423.75 MiB/s [2024-11-02T23:51:06.622Z] 23:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:12.527 23:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:12.527 23:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:12.527 23:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:12.527 23:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:12.527 23:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:12.527 [2024-11-02 23:51:06.605453] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:12.527 [2024-11-02 23:51:06.605689] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:12.527 23:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.527 23:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.527 23:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.527 23:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:11:12.786 23:51:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.786 23:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:12.786 "name": "raid_bdev1", 00:11:12.786 "uuid": "1664e8a7-f6fd-4162-b75c-a756ef184853", 00:11:12.786 "strip_size_kb": 0, 00:11:12.786 "state": "online", 00:11:12.786 "raid_level": "raid1", 00:11:12.786 "superblock": false, 00:11:12.786 "num_base_bdevs": 2, 00:11:12.786 "num_base_bdevs_discovered": 2, 00:11:12.786 "num_base_bdevs_operational": 2, 00:11:12.786 "process": { 00:11:12.786 "type": "rebuild", 00:11:12.786 "target": "spare", 00:11:12.786 "progress": { 00:11:12.786 "blocks": 34816, 00:11:12.786 "percent": 53 00:11:12.786 } 00:11:12.786 }, 00:11:12.786 "base_bdevs_list": [ 00:11:12.786 { 00:11:12.786 "name": "spare", 00:11:12.786 "uuid": "35390163-69a7-5660-9710-c32a934b817d", 00:11:12.786 "is_configured": true, 00:11:12.786 "data_offset": 0, 00:11:12.786 "data_size": 65536 00:11:12.786 }, 00:11:12.786 { 00:11:12.786 "name": "BaseBdev2", 00:11:12.786 "uuid": "0de74136-b94d-519f-8b2a-be88f5fad152", 00:11:12.786 "is_configured": true, 00:11:12.786 "data_offset": 0, 00:11:12.786 "data_size": 65536 00:11:12.786 } 00:11:12.786 ] 00:11:12.786 }' 00:11:12.786 23:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:12.786 23:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:12.786 23:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:12.786 23:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:12.786 23:51:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:13.303 121.60 IOPS, 364.80 MiB/s [2024-11-02T23:51:07.398Z] [2024-11-02 23:51:07.386374] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:13.561 [2024-11-02 23:51:07.600468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:11:13.821 23:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:13.821 23:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:13.821 23:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:13.821 23:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:13.821 23:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:13.821 23:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:13.821 23:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.821 23:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.821 23:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.821 23:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:13.821 23:51:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.821 23:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:13.821 "name": "raid_bdev1", 00:11:13.821 "uuid": "1664e8a7-f6fd-4162-b75c-a756ef184853", 00:11:13.821 "strip_size_kb": 0, 00:11:13.821 "state": "online", 00:11:13.821 "raid_level": "raid1", 00:11:13.821 "superblock": false, 00:11:13.821 "num_base_bdevs": 2, 00:11:13.821 "num_base_bdevs_discovered": 2, 00:11:13.821 "num_base_bdevs_operational": 2, 00:11:13.821 "process": { 00:11:13.821 "type": "rebuild", 00:11:13.821 "target": "spare", 
00:11:13.821 "progress": { 00:11:13.821 "blocks": 53248, 00:11:13.821 "percent": 81 00:11:13.821 } 00:11:13.821 }, 00:11:13.821 "base_bdevs_list": [ 00:11:13.821 { 00:11:13.821 "name": "spare", 00:11:13.821 "uuid": "35390163-69a7-5660-9710-c32a934b817d", 00:11:13.821 "is_configured": true, 00:11:13.821 "data_offset": 0, 00:11:13.821 "data_size": 65536 00:11:13.821 }, 00:11:13.821 { 00:11:13.821 "name": "BaseBdev2", 00:11:13.821 "uuid": "0de74136-b94d-519f-8b2a-be88f5fad152", 00:11:13.821 "is_configured": true, 00:11:13.821 "data_offset": 0, 00:11:13.821 "data_size": 65536 00:11:13.821 } 00:11:13.821 ] 00:11:13.821 }' 00:11:13.821 23:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:13.821 23:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:13.821 23:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:13.821 23:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:13.821 23:51:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:14.080 [2024-11-02 23:51:07.922766] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:11:14.080 107.50 IOPS, 322.50 MiB/s [2024-11-02T23:51:08.175Z] [2024-11-02 23:51:08.035836] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:11:14.652 [2024-11-02 23:51:08.460775] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:14.652 [2024-11-02 23:51:08.565640] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:14.652 [2024-11-02 23:51:08.567769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.911 23:51:08 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:14.911 23:51:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:14.911 23:51:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:14.911 23:51:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:14.911 23:51:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:14.911 23:51:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:14.911 23:51:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.911 23:51:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.911 23:51:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.911 23:51:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.911 23:51:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.911 23:51:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:14.911 "name": "raid_bdev1", 00:11:14.911 "uuid": "1664e8a7-f6fd-4162-b75c-a756ef184853", 00:11:14.911 "strip_size_kb": 0, 00:11:14.911 "state": "online", 00:11:14.911 "raid_level": "raid1", 00:11:14.911 "superblock": false, 00:11:14.911 "num_base_bdevs": 2, 00:11:14.911 "num_base_bdevs_discovered": 2, 00:11:14.911 "num_base_bdevs_operational": 2, 00:11:14.911 "base_bdevs_list": [ 00:11:14.911 { 00:11:14.911 "name": "spare", 00:11:14.911 "uuid": "35390163-69a7-5660-9710-c32a934b817d", 00:11:14.911 "is_configured": true, 00:11:14.911 "data_offset": 0, 00:11:14.911 "data_size": 65536 00:11:14.911 }, 00:11:14.911 { 00:11:14.911 "name": "BaseBdev2", 00:11:14.911 "uuid": "0de74136-b94d-519f-8b2a-be88f5fad152", 00:11:14.911 
"is_configured": true, 00:11:14.911 "data_offset": 0, 00:11:14.911 "data_size": 65536 00:11:14.911 } 00:11:14.911 ] 00:11:14.911 }' 00:11:14.911 23:51:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:14.911 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:14.911 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:15.170 96.71 IOPS, 290.14 MiB/s [2024-11-02T23:51:09.265Z] 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:15.170 "name": "raid_bdev1", 00:11:15.170 "uuid": 
"1664e8a7-f6fd-4162-b75c-a756ef184853", 00:11:15.170 "strip_size_kb": 0, 00:11:15.170 "state": "online", 00:11:15.170 "raid_level": "raid1", 00:11:15.170 "superblock": false, 00:11:15.170 "num_base_bdevs": 2, 00:11:15.170 "num_base_bdevs_discovered": 2, 00:11:15.170 "num_base_bdevs_operational": 2, 00:11:15.170 "base_bdevs_list": [ 00:11:15.170 { 00:11:15.170 "name": "spare", 00:11:15.170 "uuid": "35390163-69a7-5660-9710-c32a934b817d", 00:11:15.170 "is_configured": true, 00:11:15.170 "data_offset": 0, 00:11:15.170 "data_size": 65536 00:11:15.170 }, 00:11:15.170 { 00:11:15.170 "name": "BaseBdev2", 00:11:15.170 "uuid": "0de74136-b94d-519f-8b2a-be88f5fad152", 00:11:15.170 "is_configured": true, 00:11:15.170 "data_offset": 0, 00:11:15.170 "data_size": 65536 00:11:15.170 } 00:11:15.170 ] 00:11:15.170 }' 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.170 "name": "raid_bdev1", 00:11:15.170 "uuid": "1664e8a7-f6fd-4162-b75c-a756ef184853", 00:11:15.170 "strip_size_kb": 0, 00:11:15.170 "state": "online", 00:11:15.170 "raid_level": "raid1", 00:11:15.170 "superblock": false, 00:11:15.170 "num_base_bdevs": 2, 00:11:15.170 "num_base_bdevs_discovered": 2, 00:11:15.170 "num_base_bdevs_operational": 2, 00:11:15.170 "base_bdevs_list": [ 00:11:15.170 { 00:11:15.170 "name": "spare", 00:11:15.170 "uuid": "35390163-69a7-5660-9710-c32a934b817d", 00:11:15.170 "is_configured": true, 00:11:15.170 "data_offset": 0, 00:11:15.170 "data_size": 65536 00:11:15.170 }, 00:11:15.170 { 00:11:15.170 "name": "BaseBdev2", 00:11:15.170 "uuid": "0de74136-b94d-519f-8b2a-be88f5fad152", 00:11:15.170 "is_configured": true, 00:11:15.170 "data_offset": 0, 00:11:15.170 "data_size": 65536 00:11:15.170 } 00:11:15.170 ] 00:11:15.170 }' 00:11:15.170 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.170 23:51:09 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.738 [2024-11-02 23:51:09.626062] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:15.738 [2024-11-02 23:51:09.626098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.738 00:11:15.738 Latency(us) 00:11:15.738 [2024-11-02T23:51:09.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:15.738 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:15.738 raid_bdev1 : 7.64 92.82 278.46 0.00 0.00 15241.81 282.61 109436.53 00:11:15.738 [2024-11-02T23:51:09.833Z] =================================================================================================================== 00:11:15.738 [2024-11-02T23:51:09.833Z] Total : 92.82 278.46 0.00 0.00 15241.81 282.61 109436.53 00:11:15.738 [2024-11-02 23:51:09.641200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.738 [2024-11-02 23:51:09.641242] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.738 [2024-11-02 23:51:09.641314] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.738 [2024-11-02 23:51:09.641333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:15.738 { 00:11:15.738 "results": [ 00:11:15.738 { 00:11:15.738 "job": "raid_bdev1", 00:11:15.738 "core_mask": "0x1", 00:11:15.738 "workload": "randrw", 00:11:15.738 "percentage": 50, 00:11:15.738 "status": "finished", 
00:11:15.738 "queue_depth": 2, 00:11:15.738 "io_size": 3145728, 00:11:15.738 "runtime": 7.638326, 00:11:15.738 "iops": 92.8213852092723, 00:11:15.738 "mibps": 278.46415562781687, 00:11:15.738 "io_failed": 0, 00:11:15.738 "io_timeout": 0, 00:11:15.738 "avg_latency_us": 15241.81059244523, 00:11:15.738 "min_latency_us": 282.6061135371179, 00:11:15.738 "max_latency_us": 109436.5344978166 00:11:15.738 } 00:11:15.738 ], 00:11:15.738 "core_count": 1 00:11:15.738 } 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:15.738 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:15.997 /dev/nbd0 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:15.997 1+0 records in 00:11:15.997 1+0 records out 00:11:15.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371962 s, 11.0 MB/s 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:15.997 23:51:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:16.256 /dev/nbd1 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.256 1+0 records in 00:11:16.256 1+0 records out 00:11:16.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485022 s, 8.4 MB/s 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@891 -- # return 0 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:16.256 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:16.515 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:16.515 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:16.515 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:16.515 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:16.515 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:16.515 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:16.515 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:16.515 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:16.515 23:51:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:16.515 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:16.515 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:16.515 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:16.515 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:16.515 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:16.515 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:16.773 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:16.773 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:16.773 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:16.773 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:16.773 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:16.773 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:16.773 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:16.773 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:16.773 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:16.773 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 86956 00:11:16.773 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 86956 ']' 00:11:16.773 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 
-- # kill -0 86956 00:11:16.773 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:11:16.773 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:16.773 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86956 00:11:16.773 killing process with pid 86956 00:11:16.773 Received shutdown signal, test time was about 8.751442 seconds 00:11:16.773 00:11:16.773 Latency(us) 00:11:16.773 [2024-11-02T23:51:10.868Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:16.773 [2024-11-02T23:51:10.868Z] =================================================================================================================== 00:11:16.774 [2024-11-02T23:51:10.869Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:16.774 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:16.774 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:16.774 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86956' 00:11:16.774 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 86956 00:11:16.774 [2024-11-02 23:51:10.748917] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:16.774 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 86956 00:11:16.774 [2024-11-02 23:51:10.774576] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:17.032 ************************************ 00:11:17.032 END TEST raid_rebuild_test_io 00:11:17.032 23:51:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:17.032 00:11:17.032 real 0m10.700s 00:11:17.032 user 0m13.947s 00:11:17.032 sys 0m1.446s 00:11:17.032 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 
-- # xtrace_disable 00:11:17.032 23:51:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:17.032 ************************************ 00:11:17.032 23:51:11 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:11:17.032 23:51:11 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:11:17.032 23:51:11 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:17.032 23:51:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:17.032 ************************************ 00:11:17.032 START TEST raid_rebuild_test_sb_io 00:11:17.032 ************************************ 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 
00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:17.032 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:17.033 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87323 00:11:17.033 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87323 00:11:17.033 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:17.033 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 87323 ']' 00:11:17.033 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.033 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:17.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.033 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.033 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:17.033 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:17.292 [2024-11-02 23:51:11.157478] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:11:17.292 [2024-11-02 23:51:11.157644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87323 ] 00:11:17.292 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:17.292 Zero copy mechanism will not be used. 
00:11:17.292 [2024-11-02 23:51:11.316183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.292 [2024-11-02 23:51:11.342275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.292 [2024-11-02 23:51:11.384214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.292 [2024-11-02 23:51:11.384250] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.230 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:18.230 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:11:18.230 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:18.230 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:18.230 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.230 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.230 BaseBdev1_malloc 00:11:18.230 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.230 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:18.230 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.230 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.230 [2024-11-02 23:51:11.989644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:18.230 [2024-11-02 23:51:11.989707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.230 [2024-11-02 23:51:11.989729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 
00:11:18.230 [2024-11-02 23:51:11.989758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.230 [2024-11-02 23:51:11.991807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.230 [2024-11-02 23:51:11.991839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:18.230 BaseBdev1 00:11:18.230 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.230 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:18.230 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:18.230 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.230 23:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.230 BaseBdev2_malloc 00:11:18.230 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.230 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:18.230 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.230 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.230 [2024-11-02 23:51:12.017842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:18.230 [2024-11-02 23:51:12.017883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.230 [2024-11-02 23:51:12.017917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:18.230 [2024-11-02 23:51:12.017926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.230 [2024-11-02 23:51:12.019977] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.230 [2024-11-02 23:51:12.020014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:18.230 BaseBdev2 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.231 spare_malloc 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.231 spare_delay 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.231 [2024-11-02 23:51:12.058015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:18.231 [2024-11-02 23:51:12.058061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.231 [2024-11-02 23:51:12.058098] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:18.231 [2024-11-02 23:51:12.058107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.231 [2024-11-02 23:51:12.060243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.231 [2024-11-02 23:51:12.060275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:18.231 spare 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.231 [2024-11-02 23:51:12.070038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:18.231 [2024-11-02 23:51:12.071801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.231 [2024-11-02 23:51:12.071948] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:18.231 [2024-11-02 23:51:12.071961] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:18.231 [2024-11-02 23:51:12.072224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:18.231 [2024-11-02 23:51:12.072372] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:18.231 [2024-11-02 23:51:12.072392] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:18.231 [2024-11-02 23:51:12.072500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.231 23:51:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.231 "name": "raid_bdev1", 00:11:18.231 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:18.231 
"strip_size_kb": 0, 00:11:18.231 "state": "online", 00:11:18.231 "raid_level": "raid1", 00:11:18.231 "superblock": true, 00:11:18.231 "num_base_bdevs": 2, 00:11:18.231 "num_base_bdevs_discovered": 2, 00:11:18.231 "num_base_bdevs_operational": 2, 00:11:18.231 "base_bdevs_list": [ 00:11:18.231 { 00:11:18.231 "name": "BaseBdev1", 00:11:18.231 "uuid": "0eddff7d-6730-5406-b89b-9494790e1fbc", 00:11:18.231 "is_configured": true, 00:11:18.231 "data_offset": 2048, 00:11:18.231 "data_size": 63488 00:11:18.231 }, 00:11:18.231 { 00:11:18.231 "name": "BaseBdev2", 00:11:18.231 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:18.231 "is_configured": true, 00:11:18.231 "data_offset": 2048, 00:11:18.231 "data_size": 63488 00:11:18.231 } 00:11:18.231 ] 00:11:18.231 }' 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.231 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.491 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:18.491 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.491 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.491 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:18.491 [2024-11-02 23:51:12.541453] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.491 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.491 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.751 [2024-11-02 23:51:12.625048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:18.751 23:51:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.751 "name": "raid_bdev1", 00:11:18.751 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:18.751 "strip_size_kb": 0, 00:11:18.751 "state": "online", 00:11:18.751 "raid_level": "raid1", 00:11:18.751 "superblock": true, 00:11:18.751 "num_base_bdevs": 2, 00:11:18.751 "num_base_bdevs_discovered": 1, 00:11:18.751 "num_base_bdevs_operational": 1, 00:11:18.751 "base_bdevs_list": [ 00:11:18.751 { 00:11:18.751 "name": null, 00:11:18.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.751 "is_configured": false, 00:11:18.751 "data_offset": 0, 00:11:18.751 "data_size": 63488 00:11:18.751 }, 00:11:18.751 { 00:11:18.751 "name": "BaseBdev2", 00:11:18.751 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:18.751 "is_configured": true, 00:11:18.751 "data_offset": 2048, 00:11:18.751 "data_size": 63488 00:11:18.751 } 00:11:18.751 ] 00:11:18.751 }' 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.751 23:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.751 [2024-11-02 23:51:12.714854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:18.751 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:18.751 Zero copy mechanism will not be used. 00:11:18.751 Running I/O for 60 seconds... 00:11:19.011 23:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:19.011 23:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.011 23:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.011 [2024-11-02 23:51:13.066216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:19.269 23:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.269 23:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:19.269 [2024-11-02 23:51:13.129970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:19.269 [2024-11-02 23:51:13.131926] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:19.269 [2024-11-02 23:51:13.255285] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:19.269 [2024-11-02 23:51:13.255616] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:19.527 [2024-11-02 23:51:13.363450] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:19.527 [2024-11-02 23:51:13.363770] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:11:19.787 [2024-11-02 23:51:13.681714] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:19.787 187.00 IOPS, 561.00 MiB/s [2024-11-02T23:51:13.882Z] [2024-11-02 23:51:13.798879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:19.787 [2024-11-02 23:51:13.799066] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:20.046 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:20.046 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:20.046 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:20.046 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:20.046 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:20.046 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.046 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.046 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.046 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.046 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.305 [2024-11-02 23:51:14.144475] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:20.306 [2024-11-02 23:51:14.144842] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 
offset_end: 18432 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:20.306 "name": "raid_bdev1", 00:11:20.306 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:20.306 "strip_size_kb": 0, 00:11:20.306 "state": "online", 00:11:20.306 "raid_level": "raid1", 00:11:20.306 "superblock": true, 00:11:20.306 "num_base_bdevs": 2, 00:11:20.306 "num_base_bdevs_discovered": 2, 00:11:20.306 "num_base_bdevs_operational": 2, 00:11:20.306 "process": { 00:11:20.306 "type": "rebuild", 00:11:20.306 "target": "spare", 00:11:20.306 "progress": { 00:11:20.306 "blocks": 12288, 00:11:20.306 "percent": 19 00:11:20.306 } 00:11:20.306 }, 00:11:20.306 "base_bdevs_list": [ 00:11:20.306 { 00:11:20.306 "name": "spare", 00:11:20.306 "uuid": "556c3f3b-35b3-5cc2-9a5c-6d4485988426", 00:11:20.306 "is_configured": true, 00:11:20.306 "data_offset": 2048, 00:11:20.306 "data_size": 63488 00:11:20.306 }, 00:11:20.306 { 00:11:20.306 "name": "BaseBdev2", 00:11:20.306 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:20.306 "is_configured": true, 00:11:20.306 "data_offset": 2048, 00:11:20.306 "data_size": 63488 00:11:20.306 } 00:11:20.306 ] 00:11:20.306 }' 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:11:20.306 [2024-11-02 23:51:14.267488] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:20.306 [2024-11-02 23:51:14.351787] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:20.306 [2024-11-02 23:51:14.352019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:20.306 [2024-11-02 23:51:14.358130] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:20.306 [2024-11-02 23:51:14.365343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.306 [2024-11-02 23:51:14.365390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:20.306 [2024-11-02 23:51:14.365404] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:20.306 [2024-11-02 23:51:14.376915] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.306 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.647 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.647 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.647 "name": "raid_bdev1", 00:11:20.647 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:20.647 "strip_size_kb": 0, 00:11:20.647 "state": "online", 00:11:20.647 "raid_level": "raid1", 00:11:20.647 "superblock": true, 00:11:20.647 "num_base_bdevs": 2, 00:11:20.647 "num_base_bdevs_discovered": 1, 00:11:20.647 "num_base_bdevs_operational": 1, 00:11:20.647 "base_bdevs_list": [ 00:11:20.647 { 00:11:20.647 "name": null, 00:11:20.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.647 "is_configured": false, 00:11:20.647 "data_offset": 0, 00:11:20.647 "data_size": 63488 00:11:20.647 }, 00:11:20.647 { 00:11:20.647 "name": "BaseBdev2", 00:11:20.647 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:20.647 "is_configured": true, 00:11:20.647 "data_offset": 2048, 00:11:20.647 "data_size": 63488 00:11:20.647 } 00:11:20.647 ] 00:11:20.647 }' 00:11:20.647 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.647 23:51:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.905 181.00 IOPS, 543.00 MiB/s [2024-11-02T23:51:15.000Z] 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:20.905 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:20.905 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:20.905 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:20.905 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:20.905 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.905 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.905 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.905 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.905 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.905 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:20.905 "name": "raid_bdev1", 00:11:20.905 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:20.905 "strip_size_kb": 0, 00:11:20.905 "state": "online", 00:11:20.905 "raid_level": "raid1", 00:11:20.905 "superblock": true, 00:11:20.905 "num_base_bdevs": 2, 00:11:20.905 "num_base_bdevs_discovered": 1, 00:11:20.905 "num_base_bdevs_operational": 1, 00:11:20.905 "base_bdevs_list": [ 00:11:20.905 { 00:11:20.905 "name": null, 00:11:20.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.905 "is_configured": false, 00:11:20.905 "data_offset": 0, 00:11:20.905 "data_size": 63488 00:11:20.905 }, 00:11:20.905 { 
00:11:20.905 "name": "BaseBdev2", 00:11:20.905 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:20.905 "is_configured": true, 00:11:20.905 "data_offset": 2048, 00:11:20.905 "data_size": 63488 00:11:20.905 } 00:11:20.905 ] 00:11:20.905 }' 00:11:20.905 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:20.905 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:20.905 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:20.905 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:20.905 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:20.905 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.905 23:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.905 [2024-11-02 23:51:14.985112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:21.165 23:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.165 23:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:21.165 [2024-11-02 23:51:15.023900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:11:21.165 [2024-11-02 23:51:15.025796] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:21.165 [2024-11-02 23:51:15.143994] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:21.165 [2024-11-02 23:51:15.144296] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:21.165 [2024-11-02 23:51:15.251550] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:21.165 [2024-11-02 23:51:15.251794] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:21.736 [2024-11-02 23:51:15.696370] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:21.736 [2024-11-02 23:51:15.696627] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:21.997 175.00 IOPS, 525.00 MiB/s [2024-11-02T23:51:16.092Z] [2024-11-02 23:51:15.918279] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:21.997 [2024-11-02 23:51:15.918651] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:21.997 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:21.997 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:21.997 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:21.997 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:21.997 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:21.997 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.997 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.997 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.997 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:11:21.997 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.997 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:21.997 "name": "raid_bdev1", 00:11:21.997 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:21.997 "strip_size_kb": 0, 00:11:21.997 "state": "online", 00:11:21.997 "raid_level": "raid1", 00:11:21.997 "superblock": true, 00:11:21.997 "num_base_bdevs": 2, 00:11:21.997 "num_base_bdevs_discovered": 2, 00:11:21.997 "num_base_bdevs_operational": 2, 00:11:21.997 "process": { 00:11:21.997 "type": "rebuild", 00:11:21.997 "target": "spare", 00:11:21.997 "progress": { 00:11:21.997 "blocks": 14336, 00:11:21.997 "percent": 22 00:11:21.997 } 00:11:21.997 }, 00:11:21.997 "base_bdevs_list": [ 00:11:21.997 { 00:11:21.997 "name": "spare", 00:11:21.997 "uuid": "556c3f3b-35b3-5cc2-9a5c-6d4485988426", 00:11:21.997 "is_configured": true, 00:11:21.997 "data_offset": 2048, 00:11:21.997 "data_size": 63488 00:11:21.997 }, 00:11:21.997 { 00:11:21.997 "name": "BaseBdev2", 00:11:21.997 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:21.997 "is_configured": true, 00:11:21.997 "data_offset": 2048, 00:11:21.997 "data_size": 63488 00:11:21.997 } 00:11:21.997 ] 00:11:21.997 }' 00:11:21.997 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:22.256 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:22.256 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:22.256 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:22.256 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:22.256 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false 
']' 00:11:22.256 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:22.256 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:22.256 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:22.256 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:22.256 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=335 00:11:22.256 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:22.256 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:22.256 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:22.256 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:22.256 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:22.256 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:22.256 [2024-11-02 23:51:16.153821] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:22.256 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.256 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.256 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.257 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.257 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.257 23:51:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:22.257 "name": "raid_bdev1", 00:11:22.257 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:22.257 "strip_size_kb": 0, 00:11:22.257 "state": "online", 00:11:22.257 "raid_level": "raid1", 00:11:22.257 "superblock": true, 00:11:22.257 "num_base_bdevs": 2, 00:11:22.257 "num_base_bdevs_discovered": 2, 00:11:22.257 "num_base_bdevs_operational": 2, 00:11:22.257 "process": { 00:11:22.257 "type": "rebuild", 00:11:22.257 "target": "spare", 00:11:22.257 "progress": { 00:11:22.257 "blocks": 16384, 00:11:22.257 "percent": 25 00:11:22.257 } 00:11:22.257 }, 00:11:22.257 "base_bdevs_list": [ 00:11:22.257 { 00:11:22.257 "name": "spare", 00:11:22.257 "uuid": "556c3f3b-35b3-5cc2-9a5c-6d4485988426", 00:11:22.257 "is_configured": true, 00:11:22.257 "data_offset": 2048, 00:11:22.257 "data_size": 63488 00:11:22.257 }, 00:11:22.257 { 00:11:22.257 "name": "BaseBdev2", 00:11:22.257 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:22.257 "is_configured": true, 00:11:22.257 "data_offset": 2048, 00:11:22.257 "data_size": 63488 00:11:22.257 } 00:11:22.257 ] 00:11:22.257 }' 00:11:22.257 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:22.257 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:22.257 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:22.257 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:22.257 23:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:22.516 [2024-11-02 23:51:16.581408] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:22.776 146.00 IOPS, 438.00 MiB/s [2024-11-02T23:51:16.871Z] [2024-11-02 23:51:16.814903] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:23.036 [2024-11-02 23:51:16.942318] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:23.295 23:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:23.295 23:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:23.295 23:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:23.295 23:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:23.295 23:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:23.295 23:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:23.295 23:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.295 23:51:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.295 23:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.295 23:51:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.295 [2024-11-02 23:51:17.304822] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:11:23.295 23:51:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.295 23:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:23.295 "name": "raid_bdev1", 00:11:23.295 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:23.295 "strip_size_kb": 0, 00:11:23.295 "state": "online", 00:11:23.295 "raid_level": "raid1", 
00:11:23.295 "superblock": true, 00:11:23.295 "num_base_bdevs": 2, 00:11:23.295 "num_base_bdevs_discovered": 2, 00:11:23.295 "num_base_bdevs_operational": 2, 00:11:23.295 "process": { 00:11:23.295 "type": "rebuild", 00:11:23.295 "target": "spare", 00:11:23.295 "progress": { 00:11:23.295 "blocks": 30720, 00:11:23.295 "percent": 48 00:11:23.296 } 00:11:23.296 }, 00:11:23.296 "base_bdevs_list": [ 00:11:23.296 { 00:11:23.296 "name": "spare", 00:11:23.296 "uuid": "556c3f3b-35b3-5cc2-9a5c-6d4485988426", 00:11:23.296 "is_configured": true, 00:11:23.296 "data_offset": 2048, 00:11:23.296 "data_size": 63488 00:11:23.296 }, 00:11:23.296 { 00:11:23.296 "name": "BaseBdev2", 00:11:23.296 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:23.296 "is_configured": true, 00:11:23.296 "data_offset": 2048, 00:11:23.296 "data_size": 63488 00:11:23.296 } 00:11:23.296 ] 00:11:23.296 }' 00:11:23.296 23:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:23.296 23:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:23.296 23:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:23.553 23:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:23.553 23:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:24.087 128.40 IOPS, 385.20 MiB/s [2024-11-02T23:51:18.182Z] [2024-11-02 23:51:18.080664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:24.354 [2024-11-02 23:51:18.395495] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:11:24.354 23:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:24.354 23:51:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:24.354 23:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:24.354 23:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:24.354 23:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:24.354 23:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:24.354 23:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.354 23:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.354 23:51:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.354 23:51:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.354 23:51:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.613 23:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:24.613 "name": "raid_bdev1", 00:11:24.613 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:24.613 "strip_size_kb": 0, 00:11:24.613 "state": "online", 00:11:24.613 "raid_level": "raid1", 00:11:24.613 "superblock": true, 00:11:24.613 "num_base_bdevs": 2, 00:11:24.613 "num_base_bdevs_discovered": 2, 00:11:24.613 "num_base_bdevs_operational": 2, 00:11:24.613 "process": { 00:11:24.613 "type": "rebuild", 00:11:24.613 "target": "spare", 00:11:24.613 "progress": { 00:11:24.613 "blocks": 51200, 00:11:24.613 "percent": 80 00:11:24.613 } 00:11:24.613 }, 00:11:24.613 "base_bdevs_list": [ 00:11:24.613 { 00:11:24.613 "name": "spare", 00:11:24.613 "uuid": "556c3f3b-35b3-5cc2-9a5c-6d4485988426", 00:11:24.613 "is_configured": true, 00:11:24.613 "data_offset": 2048, 00:11:24.613 "data_size": 63488 
00:11:24.613 }, 00:11:24.613 { 00:11:24.613 "name": "BaseBdev2", 00:11:24.613 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:24.613 "is_configured": true, 00:11:24.613 "data_offset": 2048, 00:11:24.613 "data_size": 63488 00:11:24.613 } 00:11:24.613 ] 00:11:24.613 }' 00:11:24.613 23:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:24.613 23:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:24.613 23:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:24.613 23:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:24.613 23:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:24.613 [2024-11-02 23:51:18.602723] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:11:25.442 114.67 IOPS, 344.00 MiB/s [2024-11-02T23:51:19.537Z] [2024-11-02 23:51:19.243900] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:25.442 [2024-11-02 23:51:19.349180] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:25.442 [2024-11-02 23:51:19.351616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.701 "name": "raid_bdev1", 00:11:25.701 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:25.701 "strip_size_kb": 0, 00:11:25.701 "state": "online", 00:11:25.701 "raid_level": "raid1", 00:11:25.701 "superblock": true, 00:11:25.701 "num_base_bdevs": 2, 00:11:25.701 "num_base_bdevs_discovered": 2, 00:11:25.701 "num_base_bdevs_operational": 2, 00:11:25.701 "base_bdevs_list": [ 00:11:25.701 { 00:11:25.701 "name": "spare", 00:11:25.701 "uuid": "556c3f3b-35b3-5cc2-9a5c-6d4485988426", 00:11:25.701 "is_configured": true, 00:11:25.701 "data_offset": 2048, 00:11:25.701 "data_size": 63488 00:11:25.701 }, 00:11:25.701 { 00:11:25.701 "name": "BaseBdev2", 00:11:25.701 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:25.701 "is_configured": true, 00:11:25.701 "data_offset": 2048, 00:11:25.701 "data_size": 63488 00:11:25.701 } 00:11:25.701 ] 00:11:25.701 }' 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.701 103.00 IOPS, 309.00 MiB/s [2024-11-02T23:51:19.796Z] 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.701 "name": "raid_bdev1", 00:11:25.701 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:25.701 "strip_size_kb": 0, 00:11:25.701 "state": "online", 00:11:25.701 "raid_level": "raid1", 00:11:25.701 "superblock": true, 00:11:25.701 "num_base_bdevs": 2, 00:11:25.701 "num_base_bdevs_discovered": 2, 00:11:25.701 "num_base_bdevs_operational": 2, 00:11:25.701 "base_bdevs_list": [ 00:11:25.701 { 00:11:25.701 "name": "spare", 00:11:25.701 "uuid": 
"556c3f3b-35b3-5cc2-9a5c-6d4485988426", 00:11:25.701 "is_configured": true, 00:11:25.701 "data_offset": 2048, 00:11:25.701 "data_size": 63488 00:11:25.701 }, 00:11:25.701 { 00:11:25.701 "name": "BaseBdev2", 00:11:25.701 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:25.701 "is_configured": true, 00:11:25.701 "data_offset": 2048, 00:11:25.701 "data_size": 63488 00:11:25.701 } 00:11:25.701 ] 00:11:25.701 }' 00:11:25.701 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.959 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:25.959 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:25.959 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:25.959 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:25.959 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.959 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.959 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.959 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.959 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:25.959 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.959 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.960 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.960 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:11:25.960 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.960 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.960 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.960 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.960 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.960 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.960 "name": "raid_bdev1", 00:11:25.960 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:25.960 "strip_size_kb": 0, 00:11:25.960 "state": "online", 00:11:25.960 "raid_level": "raid1", 00:11:25.960 "superblock": true, 00:11:25.960 "num_base_bdevs": 2, 00:11:25.960 "num_base_bdevs_discovered": 2, 00:11:25.960 "num_base_bdevs_operational": 2, 00:11:25.960 "base_bdevs_list": [ 00:11:25.960 { 00:11:25.960 "name": "spare", 00:11:25.960 "uuid": "556c3f3b-35b3-5cc2-9a5c-6d4485988426", 00:11:25.960 "is_configured": true, 00:11:25.960 "data_offset": 2048, 00:11:25.960 "data_size": 63488 00:11:25.960 }, 00:11:25.960 { 00:11:25.960 "name": "BaseBdev2", 00:11:25.960 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:25.960 "is_configured": true, 00:11:25.960 "data_offset": 2048, 00:11:25.960 "data_size": 63488 00:11:25.960 } 00:11:25.960 ] 00:11:25.960 }' 00:11:25.960 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.960 23:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.573 23:51:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.573 [2024-11-02 23:51:20.341596] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.573 [2024-11-02 23:51:20.341634] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.573 00:11:26.573 Latency(us) 00:11:26.573 [2024-11-02T23:51:20.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:26.573 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:26.573 raid_bdev1 : 7.72 96.38 289.14 0.00 0.00 14122.80 275.45 108978.64 00:11:26.573 [2024-11-02T23:51:20.668Z] =================================================================================================================== 00:11:26.573 [2024-11-02T23:51:20.668Z] Total : 96.38 289.14 0.00 0.00 14122.80 275.45 108978.64 00:11:26.573 [2024-11-02 23:51:20.425319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.573 [2024-11-02 23:51:20.425379] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.573 [2024-11-02 23:51:20.425467] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.573 [2024-11-02 23:51:20.425480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:26.573 { 00:11:26.573 "results": [ 00:11:26.573 { 00:11:26.573 "job": "raid_bdev1", 00:11:26.573 "core_mask": "0x1", 00:11:26.573 "workload": "randrw", 00:11:26.573 "percentage": 50, 00:11:26.573 "status": "finished", 00:11:26.573 "queue_depth": 2, 00:11:26.573 "io_size": 3145728, 00:11:26.573 "runtime": 7.719393, 00:11:26.573 "iops": 96.38063510952222, 00:11:26.573 "mibps": 289.1419053285666, 00:11:26.573 "io_failed": 0, 00:11:26.573 "io_timeout": 0, 00:11:26.573 "avg_latency_us": 14122.803700051649, 00:11:26.573 "min_latency_us": 
275.45152838427947, 00:11:26.573 "max_latency_us": 108978.64104803493 00:11:26.573 } 00:11:26.573 ], 00:11:26.573 "core_count": 1 00:11:26.573 } 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:26.573 23:51:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:26.573 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:26.834 /dev/nbd0 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.834 1+0 records in 00:11:26.834 1+0 records out 00:11:26.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492517 s, 8.3 MB/s 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:26.834 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:27.093 /dev/nbd1 00:11:27.093 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:11:27.093 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:27.093 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:11:27.093 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:11:27.093 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:27.093 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:27.093 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:11:27.093 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:11:27.093 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:27.093 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:27.093 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.093 1+0 records in 00:11:27.093 1+0 records out 00:11:27.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209364 s, 19.6 MB/s 00:11:27.093 23:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.093 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:11:27.093 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.093 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:27.093 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:11:27.093 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.093 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:27.093 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:27.093 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:27.093 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:27.093 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:27.093 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:27.093 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:27.094 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:27.094 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:27.353 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:27.353 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:27.353 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:27.353 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:27.353 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:27.353 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:27.353 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:27.353 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:27.353 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:27.353 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:27.353 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:27.353 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:27.353 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:27.353 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:27.353 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:27.612 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:27.612 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:27.612 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:27.612 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:27.612 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:27.612 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:27.612 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:27.612 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:27.612 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:27.612 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:27.612 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.612 23:51:21 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:11:27.612 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.612 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:27.612 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.612 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.612 [2024-11-02 23:51:21.565715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:27.612 [2024-11-02 23:51:21.565775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.612 [2024-11-02 23:51:21.565797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:27.612 [2024-11-02 23:51:21.565805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.612 [2024-11-02 23:51:21.568029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.612 [2024-11-02 23:51:21.568065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:27.612 [2024-11-02 23:51:21.568167] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:27.612 [2024-11-02 23:51:21.568208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:27.612 [2024-11-02 23:51:21.568311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.612 spare 00:11:27.612 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.612 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:27.612 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.612 23:51:21 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.613 [2024-11-02 23:51:21.668222] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:11:27.613 [2024-11-02 23:51:21.668262] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:27.613 [2024-11-02 23:51:21.668535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027720 00:11:27.613 [2024-11-02 23:51:21.668685] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:11:27.613 [2024-11-02 23:51:21.668702] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:11:27.613 [2024-11-02 23:51:21.668873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.613 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.613 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:27.613 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.613 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.613 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.613 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.613 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:27.613 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.613 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.613 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:27.613 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.613 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.613 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.613 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.613 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.613 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.872 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.872 "name": "raid_bdev1", 00:11:27.872 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:27.873 "strip_size_kb": 0, 00:11:27.873 "state": "online", 00:11:27.873 "raid_level": "raid1", 00:11:27.873 "superblock": true, 00:11:27.873 "num_base_bdevs": 2, 00:11:27.873 "num_base_bdevs_discovered": 2, 00:11:27.873 "num_base_bdevs_operational": 2, 00:11:27.873 "base_bdevs_list": [ 00:11:27.873 { 00:11:27.873 "name": "spare", 00:11:27.873 "uuid": "556c3f3b-35b3-5cc2-9a5c-6d4485988426", 00:11:27.873 "is_configured": true, 00:11:27.873 "data_offset": 2048, 00:11:27.873 "data_size": 63488 00:11:27.873 }, 00:11:27.873 { 00:11:27.873 "name": "BaseBdev2", 00:11:27.873 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:27.873 "is_configured": true, 00:11:27.873 "data_offset": 2048, 00:11:27.873 "data_size": 63488 00:11:27.873 } 00:11:27.873 ] 00:11:27.873 }' 00:11:27.873 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.873 23:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.131 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:28.131 23:51:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:28.131 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:28.131 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:28.131 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:28.131 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.131 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.131 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.131 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.131 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.131 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:28.131 "name": "raid_bdev1", 00:11:28.131 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:28.131 "strip_size_kb": 0, 00:11:28.131 "state": "online", 00:11:28.131 "raid_level": "raid1", 00:11:28.131 "superblock": true, 00:11:28.131 "num_base_bdevs": 2, 00:11:28.131 "num_base_bdevs_discovered": 2, 00:11:28.131 "num_base_bdevs_operational": 2, 00:11:28.131 "base_bdevs_list": [ 00:11:28.131 { 00:11:28.131 "name": "spare", 00:11:28.131 "uuid": "556c3f3b-35b3-5cc2-9a5c-6d4485988426", 00:11:28.131 "is_configured": true, 00:11:28.131 "data_offset": 2048, 00:11:28.131 "data_size": 63488 00:11:28.131 }, 00:11:28.131 { 00:11:28.131 "name": "BaseBdev2", 00:11:28.131 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:28.131 "is_configured": true, 00:11:28.131 "data_offset": 2048, 00:11:28.131 "data_size": 63488 00:11:28.131 } 00:11:28.131 ] 00:11:28.131 }' 00:11:28.131 23:51:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.390 [2024-11-02 23:51:22.348515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.390 "name": "raid_bdev1", 00:11:28.390 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:28.390 "strip_size_kb": 0, 00:11:28.390 "state": "online", 00:11:28.390 "raid_level": "raid1", 00:11:28.390 "superblock": true, 00:11:28.390 "num_base_bdevs": 2, 00:11:28.390 "num_base_bdevs_discovered": 1, 00:11:28.390 "num_base_bdevs_operational": 1, 00:11:28.390 "base_bdevs_list": [ 00:11:28.390 { 00:11:28.390 "name": null, 00:11:28.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.390 "is_configured": false, 00:11:28.390 
"data_offset": 0, 00:11:28.390 "data_size": 63488 00:11:28.390 }, 00:11:28.390 { 00:11:28.390 "name": "BaseBdev2", 00:11:28.390 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:28.390 "is_configured": true, 00:11:28.390 "data_offset": 2048, 00:11:28.390 "data_size": 63488 00:11:28.390 } 00:11:28.390 ] 00:11:28.390 }' 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.390 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.961 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:28.961 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.961 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.961 [2024-11-02 23:51:22.823790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:28.961 [2024-11-02 23:51:22.823977] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:28.961 [2024-11-02 23:51:22.824000] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:28.961 [2024-11-02 23:51:22.824036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:28.961 [2024-11-02 23:51:22.829314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000277f0 00:11:28.961 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.961 23:51:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:28.961 [2024-11-02 23:51:22.831227] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:29.899 23:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:29.899 23:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:29.899 23:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:29.899 23:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:29.899 23:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:29.899 23:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.899 23:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.899 23:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.899 23:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:29.899 23:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.899 23:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:29.899 "name": "raid_bdev1", 00:11:29.899 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:29.899 "strip_size_kb": 0, 00:11:29.899 "state": "online", 
00:11:29.899 "raid_level": "raid1", 00:11:29.899 "superblock": true, 00:11:29.899 "num_base_bdevs": 2, 00:11:29.899 "num_base_bdevs_discovered": 2, 00:11:29.899 "num_base_bdevs_operational": 2, 00:11:29.899 "process": { 00:11:29.899 "type": "rebuild", 00:11:29.899 "target": "spare", 00:11:29.899 "progress": { 00:11:29.899 "blocks": 20480, 00:11:29.899 "percent": 32 00:11:29.899 } 00:11:29.899 }, 00:11:29.899 "base_bdevs_list": [ 00:11:29.899 { 00:11:29.899 "name": "spare", 00:11:29.899 "uuid": "556c3f3b-35b3-5cc2-9a5c-6d4485988426", 00:11:29.899 "is_configured": true, 00:11:29.899 "data_offset": 2048, 00:11:29.899 "data_size": 63488 00:11:29.899 }, 00:11:29.899 { 00:11:29.899 "name": "BaseBdev2", 00:11:29.899 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:29.899 "is_configured": true, 00:11:29.899 "data_offset": 2048, 00:11:29.900 "data_size": 63488 00:11:29.900 } 00:11:29.900 ] 00:11:29.900 }' 00:11:29.900 23:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:29.900 23:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:29.900 23:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:29.900 23:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:29.900 23:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:29.900 23:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.900 23:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:29.900 [2024-11-02 23:51:23.967775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:30.159 [2024-11-02 23:51:24.035587] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:30.159 [2024-11-02 
23:51:24.035645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.159 [2024-11-02 23:51:24.035676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:30.159 [2024-11-02 23:51:24.035685] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:30.159 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.159 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:30.159 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.159 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.159 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.159 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.159 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:30.159 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.159 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.159 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.159 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.159 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.159 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.159 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.159 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:11:30.159 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.159 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.159 "name": "raid_bdev1", 00:11:30.159 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:30.159 "strip_size_kb": 0, 00:11:30.159 "state": "online", 00:11:30.159 "raid_level": "raid1", 00:11:30.159 "superblock": true, 00:11:30.159 "num_base_bdevs": 2, 00:11:30.159 "num_base_bdevs_discovered": 1, 00:11:30.159 "num_base_bdevs_operational": 1, 00:11:30.159 "base_bdevs_list": [ 00:11:30.159 { 00:11:30.159 "name": null, 00:11:30.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.159 "is_configured": false, 00:11:30.159 "data_offset": 0, 00:11:30.159 "data_size": 63488 00:11:30.159 }, 00:11:30.159 { 00:11:30.159 "name": "BaseBdev2", 00:11:30.159 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:30.159 "is_configured": true, 00:11:30.159 "data_offset": 2048, 00:11:30.159 "data_size": 63488 00:11:30.159 } 00:11:30.159 ] 00:11:30.159 }' 00:11:30.159 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.159 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.728 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:30.728 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.728 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.728 [2024-11-02 23:51:24.523811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:30.728 [2024-11-02 23:51:24.523875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.728 [2024-11-02 23:51:24.523900] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000a280 00:11:30.728 [2024-11-02 23:51:24.523913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.728 [2024-11-02 23:51:24.524376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.728 [2024-11-02 23:51:24.524404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:30.728 [2024-11-02 23:51:24.524503] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:30.728 [2024-11-02 23:51:24.524522] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:30.728 [2024-11-02 23:51:24.524532] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:30.728 [2024-11-02 23:51:24.524575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:30.728 [2024-11-02 23:51:24.529677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0 00:11:30.728 spare 00:11:30.728 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.728 23:51:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:30.728 [2024-11-02 23:51:24.531610] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:31.665 "name": "raid_bdev1", 00:11:31.665 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:31.665 "strip_size_kb": 0, 00:11:31.665 "state": "online", 00:11:31.665 "raid_level": "raid1", 00:11:31.665 "superblock": true, 00:11:31.665 "num_base_bdevs": 2, 00:11:31.665 "num_base_bdevs_discovered": 2, 00:11:31.665 "num_base_bdevs_operational": 2, 00:11:31.665 "process": { 00:11:31.665 "type": "rebuild", 00:11:31.665 "target": "spare", 00:11:31.665 "progress": { 00:11:31.665 "blocks": 20480, 00:11:31.665 "percent": 32 00:11:31.665 } 00:11:31.665 }, 00:11:31.665 "base_bdevs_list": [ 00:11:31.665 { 00:11:31.665 "name": "spare", 00:11:31.665 "uuid": "556c3f3b-35b3-5cc2-9a5c-6d4485988426", 00:11:31.665 "is_configured": true, 00:11:31.665 "data_offset": 2048, 00:11:31.665 "data_size": 63488 00:11:31.665 }, 00:11:31.665 { 00:11:31.665 "name": "BaseBdev2", 00:11:31.665 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:31.665 "is_configured": true, 00:11:31.665 "data_offset": 2048, 00:11:31.665 "data_size": 63488 00:11:31.665 } 00:11:31.665 ] 00:11:31.665 }' 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.665 [2024-11-02 23:51:25.695982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:31.665 [2024-11-02 23:51:25.735814] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:31.665 [2024-11-02 23:51:25.735866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.665 [2024-11-02 23:51:25.735898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:31.665 [2024-11-02 23:51:25.735905] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.665 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.924 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.924 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.924 "name": "raid_bdev1", 00:11:31.924 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:31.924 "strip_size_kb": 0, 00:11:31.924 "state": "online", 00:11:31.924 "raid_level": "raid1", 00:11:31.924 "superblock": true, 00:11:31.924 "num_base_bdevs": 2, 00:11:31.924 "num_base_bdevs_discovered": 1, 00:11:31.924 "num_base_bdevs_operational": 1, 00:11:31.924 "base_bdevs_list": [ 00:11:31.924 { 00:11:31.924 "name": null, 00:11:31.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.924 "is_configured": false, 00:11:31.924 "data_offset": 0, 00:11:31.924 "data_size": 63488 00:11:31.924 }, 00:11:31.924 { 00:11:31.924 "name": "BaseBdev2", 00:11:31.924 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:31.924 "is_configured": true, 00:11:31.924 "data_offset": 2048, 00:11:31.924 "data_size": 63488 00:11:31.924 } 00:11:31.924 ] 00:11:31.924 }' 
00:11:31.924 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.924 23:51:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:32.183 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:32.183 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:32.183 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:32.183 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:32.183 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:32.183 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.183 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.183 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.183 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:32.183 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.183 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:32.183 "name": "raid_bdev1", 00:11:32.183 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:32.183 "strip_size_kb": 0, 00:11:32.183 "state": "online", 00:11:32.183 "raid_level": "raid1", 00:11:32.183 "superblock": true, 00:11:32.183 "num_base_bdevs": 2, 00:11:32.183 "num_base_bdevs_discovered": 1, 00:11:32.183 "num_base_bdevs_operational": 1, 00:11:32.183 "base_bdevs_list": [ 00:11:32.183 { 00:11:32.183 "name": null, 00:11:32.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.183 "is_configured": false, 00:11:32.183 "data_offset": 0, 
00:11:32.183 "data_size": 63488 00:11:32.183 }, 00:11:32.183 { 00:11:32.183 "name": "BaseBdev2", 00:11:32.183 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:32.183 "is_configured": true, 00:11:32.183 "data_offset": 2048, 00:11:32.183 "data_size": 63488 00:11:32.183 } 00:11:32.183 ] 00:11:32.183 }' 00:11:32.183 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:32.452 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:32.453 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:32.453 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:32.453 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:32.453 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.453 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:32.453 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.453 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:32.453 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.453 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:32.453 [2024-11-02 23:51:26.339742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:32.453 [2024-11-02 23:51:26.339804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.453 [2024-11-02 23:51:26.339825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:32.453 [2024-11-02 23:51:26.339834] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.453 [2024-11-02 23:51:26.340209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.453 [2024-11-02 23:51:26.340234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:32.453 [2024-11-02 23:51:26.340305] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:32.453 [2024-11-02 23:51:26.340333] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:32.453 [2024-11-02 23:51:26.340343] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:32.453 [2024-11-02 23:51:26.340354] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:32.453 BaseBdev1 00:11:32.453 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.453 23:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:33.406 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:33.406 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.406 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.406 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.406 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.406 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:33.406 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.406 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.406 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.406 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.406 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.406 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.406 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.406 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.406 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.406 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.406 "name": "raid_bdev1", 00:11:33.406 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:33.406 "strip_size_kb": 0, 00:11:33.406 "state": "online", 00:11:33.406 "raid_level": "raid1", 00:11:33.406 "superblock": true, 00:11:33.406 "num_base_bdevs": 2, 00:11:33.406 "num_base_bdevs_discovered": 1, 00:11:33.406 "num_base_bdevs_operational": 1, 00:11:33.406 "base_bdevs_list": [ 00:11:33.406 { 00:11:33.406 "name": null, 00:11:33.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.406 "is_configured": false, 00:11:33.406 "data_offset": 0, 00:11:33.406 "data_size": 63488 00:11:33.406 }, 00:11:33.406 { 00:11:33.406 "name": "BaseBdev2", 00:11:33.406 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:33.406 "is_configured": true, 00:11:33.406 "data_offset": 2048, 00:11:33.406 "data_size": 63488 00:11:33.406 } 00:11:33.406 ] 00:11:33.406 }' 00:11:33.406 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.406 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:33.976 "name": "raid_bdev1", 00:11:33.976 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:33.976 "strip_size_kb": 0, 00:11:33.976 "state": "online", 00:11:33.976 "raid_level": "raid1", 00:11:33.976 "superblock": true, 00:11:33.976 "num_base_bdevs": 2, 00:11:33.976 "num_base_bdevs_discovered": 1, 00:11:33.976 "num_base_bdevs_operational": 1, 00:11:33.976 "base_bdevs_list": [ 00:11:33.976 { 00:11:33.976 "name": null, 00:11:33.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.976 "is_configured": false, 00:11:33.976 "data_offset": 0, 00:11:33.976 "data_size": 63488 00:11:33.976 }, 00:11:33.976 { 00:11:33.976 "name": "BaseBdev2", 00:11:33.976 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:33.976 "is_configured": true, 
00:11:33.976 "data_offset": 2048, 00:11:33.976 "data_size": 63488 00:11:33.976 } 00:11:33.976 ] 00:11:33.976 }' 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.976 [2024-11-02 23:51:27.913363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.976 [2024-11-02 23:51:27.913550] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:33.976 [2024-11-02 23:51:27.913572] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:33.976 request: 00:11:33.976 { 00:11:33.976 "base_bdev": "BaseBdev1", 00:11:33.976 "raid_bdev": "raid_bdev1", 00:11:33.976 "method": "bdev_raid_add_base_bdev", 00:11:33.976 "req_id": 1 00:11:33.976 } 00:11:33.976 Got JSON-RPC error response 00:11:33.976 response: 00:11:33.976 { 00:11:33.976 "code": -22, 00:11:33.976 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:33.976 } 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:33.976 23:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:34.912 23:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:34.912 23:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.912 23:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.912 23:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.912 23:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.912 23:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:11:34.912 23:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.912 23:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.912 23:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.912 23:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.912 23:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.912 23:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.913 23:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.913 23:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.913 23:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.913 23:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.913 "name": "raid_bdev1", 00:11:34.913 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:34.913 "strip_size_kb": 0, 00:11:34.913 "state": "online", 00:11:34.913 "raid_level": "raid1", 00:11:34.913 "superblock": true, 00:11:34.913 "num_base_bdevs": 2, 00:11:34.913 "num_base_bdevs_discovered": 1, 00:11:34.913 "num_base_bdevs_operational": 1, 00:11:34.913 "base_bdevs_list": [ 00:11:34.913 { 00:11:34.913 "name": null, 00:11:34.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.913 "is_configured": false, 00:11:34.913 "data_offset": 0, 00:11:34.913 "data_size": 63488 00:11:34.913 }, 00:11:34.913 { 00:11:34.913 "name": "BaseBdev2", 00:11:34.913 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:34.913 "is_configured": true, 00:11:34.913 "data_offset": 2048, 00:11:34.913 "data_size": 63488 00:11:34.913 } 00:11:34.913 ] 00:11:34.913 }' 
00:11:34.913 23:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.913 23:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.480 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:35.480 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:35.480 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:35.480 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:35.480 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:35.480 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.480 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.480 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.480 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.480 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.480 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:35.480 "name": "raid_bdev1", 00:11:35.480 "uuid": "f2bdfc88-e817-4c02-8231-ad529023888d", 00:11:35.481 "strip_size_kb": 0, 00:11:35.481 "state": "online", 00:11:35.481 "raid_level": "raid1", 00:11:35.481 "superblock": true, 00:11:35.481 "num_base_bdevs": 2, 00:11:35.481 "num_base_bdevs_discovered": 1, 00:11:35.481 "num_base_bdevs_operational": 1, 00:11:35.481 "base_bdevs_list": [ 00:11:35.481 { 00:11:35.481 "name": null, 00:11:35.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.481 "is_configured": false, 00:11:35.481 "data_offset": 0, 
00:11:35.481 "data_size": 63488 00:11:35.481 }, 00:11:35.481 { 00:11:35.481 "name": "BaseBdev2", 00:11:35.481 "uuid": "383dc576-025c-57d0-b1a8-56ab612e862e", 00:11:35.481 "is_configured": true, 00:11:35.481 "data_offset": 2048, 00:11:35.481 "data_size": 63488 00:11:35.481 } 00:11:35.481 ] 00:11:35.481 }' 00:11:35.481 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:35.481 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:35.481 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:35.481 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:35.481 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87323 00:11:35.481 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 87323 ']' 00:11:35.481 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 87323 00:11:35.481 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:11:35.481 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:35.481 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87323 00:11:35.744 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:35.744 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:35.744 killing process with pid 87323 00:11:35.744 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87323' 00:11:35.744 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 87323 00:11:35.744 Received shutdown signal, test time was 
about 16.893280 seconds 00:11:35.744 00:11:35.744 Latency(us) 00:11:35.744 [2024-11-02T23:51:29.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:35.744 [2024-11-02T23:51:29.839Z] =================================================================================================================== 00:11:35.744 [2024-11-02T23:51:29.839Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:35.744 [2024-11-02 23:51:29.577584] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:35.744 [2024-11-02 23:51:29.577715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.744 [2024-11-02 23:51:29.577789] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.744 [2024-11-02 23:51:29.577804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:11:35.744 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 87323 00:11:35.744 [2024-11-02 23:51:29.604416] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:35.744 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:35.744 00:11:35.744 real 0m18.757s 00:11:35.744 user 0m25.109s 00:11:35.744 sys 0m2.212s 00:11:35.744 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:35.744 23:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.744 ************************************ 00:11:35.744 END TEST raid_rebuild_test_sb_io 00:11:35.744 ************************************ 00:11:36.003 23:51:29 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:36.003 23:51:29 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:11:36.003 23:51:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:11:36.003 
23:51:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:36.003 23:51:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:36.003 ************************************ 00:11:36.003 START TEST raid_rebuild_test 00:11:36.003 ************************************ 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=87995 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 87995 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 87995 ']' 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.003 23:51:29 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:36.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:36.003 23:51:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.003 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:36.003 Zero copy mechanism will not be used. 00:11:36.003 [2024-11-02 23:51:29.982224] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:11:36.003 [2024-11-02 23:51:29.982339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87995 ] 00:11:36.261 [2024-11-02 23:51:30.136326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.261 [2024-11-02 23:51:30.162026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.261 [2024-11-02 23:51:30.203743] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.261 [2024-11-02 23:51:30.203794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.830 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:36.830 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:11:36.830 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:36.830 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:11:36.830 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.831 BaseBdev1_malloc 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.831 [2024-11-02 23:51:30.821081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:36.831 [2024-11-02 23:51:30.821141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.831 [2024-11-02 23:51:30.821187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:36.831 [2024-11-02 23:51:30.821202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.831 [2024-11-02 23:51:30.823306] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.831 [2024-11-02 23:51:30.823341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:36.831 BaseBdev1 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:11:36.831 BaseBdev2_malloc 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.831 [2024-11-02 23:51:30.849411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:36.831 [2024-11-02 23:51:30.849456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.831 [2024-11-02 23:51:30.849476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:36.831 [2024-11-02 23:51:30.849485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.831 [2024-11-02 23:51:30.851603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.831 [2024-11-02 23:51:30.851642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:36.831 BaseBdev2 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.831 BaseBdev3_malloc 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.831 [2024-11-02 23:51:30.877786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:36.831 [2024-11-02 23:51:30.877833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.831 [2024-11-02 23:51:30.877856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:36.831 [2024-11-02 23:51:30.877865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.831 [2024-11-02 23:51:30.879878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.831 [2024-11-02 23:51:30.879909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:36.831 BaseBdev3 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.831 BaseBdev4_malloc 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:36.831 [2024-11-02 23:51:30.916957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:36.831 [2024-11-02 23:51:30.917010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.831 [2024-11-02 23:51:30.917034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:36.831 [2024-11-02 23:51:30.917045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.831 [2024-11-02 23:51:30.919192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.831 [2024-11-02 23:51:30.919226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:36.831 BaseBdev4 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.831 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.091 spare_malloc 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.091 spare_delay 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:37.091 
23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.091 [2024-11-02 23:51:30.957205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:37.091 [2024-11-02 23:51:30.957247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.091 [2024-11-02 23:51:30.957279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:37.091 [2024-11-02 23:51:30.957287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.091 [2024-11-02 23:51:30.959386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.091 [2024-11-02 23:51:30.959421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:37.091 spare 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.091 [2024-11-02 23:51:30.969247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:37.091 [2024-11-02 23:51:30.971095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.091 [2024-11-02 23:51:30.971158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.091 [2024-11-02 23:51:30.971204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:37.091 [2024-11-02 23:51:30.971277] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000001200 00:11:37.091 [2024-11-02 23:51:30.971293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:37.091 [2024-11-02 23:51:30.971541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:37.091 [2024-11-02 23:51:30.971673] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:37.091 [2024-11-02 23:51:30.971694] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:37.091 [2024-11-02 23:51:30.971833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.091 23:51:30 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.091 23:51:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.091 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.091 23:51:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.091 "name": "raid_bdev1", 00:11:37.091 "uuid": "1e42277d-7906-4837-9996-9e0aca672ba3", 00:11:37.091 "strip_size_kb": 0, 00:11:37.091 "state": "online", 00:11:37.091 "raid_level": "raid1", 00:11:37.091 "superblock": false, 00:11:37.091 "num_base_bdevs": 4, 00:11:37.091 "num_base_bdevs_discovered": 4, 00:11:37.091 "num_base_bdevs_operational": 4, 00:11:37.091 "base_bdevs_list": [ 00:11:37.091 { 00:11:37.091 "name": "BaseBdev1", 00:11:37.091 "uuid": "6ecd2ec0-a51f-5cd3-9a49-0d737e39024a", 00:11:37.091 "is_configured": true, 00:11:37.091 "data_offset": 0, 00:11:37.091 "data_size": 65536 00:11:37.091 }, 00:11:37.091 { 00:11:37.091 "name": "BaseBdev2", 00:11:37.091 "uuid": "d40b514d-f32b-5003-96cd-7ab6de016819", 00:11:37.091 "is_configured": true, 00:11:37.091 "data_offset": 0, 00:11:37.091 "data_size": 65536 00:11:37.091 }, 00:11:37.091 { 00:11:37.091 "name": "BaseBdev3", 00:11:37.091 "uuid": "72d6294f-c544-5864-94df-d99432a06f08", 00:11:37.091 "is_configured": true, 00:11:37.091 "data_offset": 0, 00:11:37.092 "data_size": 65536 00:11:37.092 }, 00:11:37.092 { 00:11:37.092 "name": "BaseBdev4", 00:11:37.092 "uuid": "a643fa91-a01e-5af0-906e-a98512c9e9f3", 00:11:37.092 "is_configured": true, 00:11:37.092 "data_offset": 0, 00:11:37.092 "data_size": 65536 00:11:37.092 } 00:11:37.092 ] 00:11:37.092 }' 00:11:37.092 23:51:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.092 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:37.350 23:51:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:37.350 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.350 23:51:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:37.350 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.350 [2024-11-02 23:51:31.424785] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:37.610 23:51:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:37.869 [2024-11-02 23:51:31.711997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:37.869 /dev/nbd0 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:37.870 23:51:31 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:37.870 1+0 records in 00:11:37.870 1+0 records out 00:11:37.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327128 s, 12.5 MB/s 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:37.870 23:51:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:43.196 65536+0 records in 00:11:43.196 65536+0 records out 00:11:43.196 33554432 bytes (34 MB, 32 MiB) copied, 4.87666 s, 6.9 MB/s 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:43.196 
23:51:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:43.196 [2024-11-02 23:51:36.878816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.196 [2024-11-02 23:51:36.898878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.196 "name": "raid_bdev1", 00:11:43.196 "uuid": "1e42277d-7906-4837-9996-9e0aca672ba3", 00:11:43.196 "strip_size_kb": 0, 00:11:43.196 "state": "online", 00:11:43.196 "raid_level": "raid1", 00:11:43.196 "superblock": false, 00:11:43.196 "num_base_bdevs": 4, 00:11:43.196 "num_base_bdevs_discovered": 3, 00:11:43.196 "num_base_bdevs_operational": 3, 00:11:43.196 "base_bdevs_list": [ 00:11:43.196 { 00:11:43.196 "name": null, 00:11:43.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.196 
"is_configured": false,
00:11:43.196 "data_offset": 0,
00:11:43.196 "data_size": 65536
00:11:43.196 },
00:11:43.196 {
00:11:43.196 "name": "BaseBdev2",
00:11:43.196 "uuid": "d40b514d-f32b-5003-96cd-7ab6de016819",
00:11:43.196 "is_configured": true,
00:11:43.196 "data_offset": 0,
00:11:43.196 "data_size": 65536
00:11:43.196 },
00:11:43.196 {
00:11:43.196 "name": "BaseBdev3",
00:11:43.196 "uuid": "72d6294f-c544-5864-94df-d99432a06f08",
00:11:43.196 "is_configured": true,
00:11:43.196 "data_offset": 0,
00:11:43.196 "data_size": 65536
00:11:43.196 },
00:11:43.196 {
00:11:43.196 "name": "BaseBdev4",
00:11:43.196 "uuid": "a643fa91-a01e-5af0-906e-a98512c9e9f3",
00:11:43.196 "is_configured": true,
00:11:43.196 "data_offset": 0,
00:11:43.196 "data_size": 65536
00:11:43.196 }
00:11:43.196 ]
00:11:43.196 }'
00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:43.196 23:51:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:43.464 23:51:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:11:43.464 23:51:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.464 23:51:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:43.464 [2024-11-02 23:51:37.390191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:43.464 [2024-11-02 23:51:37.394225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d063c0
00:11:43.464 23:51:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.464 23:51:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1
00:11:43.464 [2024-11-02 23:51:37.396202] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:11:44.412 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:44.412 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:44.412 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:44.412 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:44.412 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:44.412 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:44.412 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:44.412 23:51:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.412 23:51:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:44.412 23:51:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.412 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:44.412 "name": "raid_bdev1",
00:11:44.412 "uuid": "1e42277d-7906-4837-9996-9e0aca672ba3",
00:11:44.412 "strip_size_kb": 0,
00:11:44.412 "state": "online",
00:11:44.412 "raid_level": "raid1",
00:11:44.412 "superblock": false,
00:11:44.412 "num_base_bdevs": 4,
00:11:44.412 "num_base_bdevs_discovered": 4,
00:11:44.412 "num_base_bdevs_operational": 4,
00:11:44.412 "process": {
00:11:44.412 "type": "rebuild",
00:11:44.413 "target": "spare",
00:11:44.413 "progress": {
00:11:44.413 "blocks": 20480,
00:11:44.413 "percent": 31
00:11:44.413 }
00:11:44.413 },
00:11:44.413 "base_bdevs_list": [
00:11:44.413 {
00:11:44.413 "name": "spare",
00:11:44.413 "uuid": "d14d36f1-d256-5915-8715-12d04b0e01e7",
00:11:44.413 "is_configured": true,
00:11:44.413 "data_offset": 0,
00:11:44.413 "data_size": 65536
00:11:44.413 },
00:11:44.413 {
00:11:44.413 "name": "BaseBdev2",
00:11:44.413 "uuid": "d40b514d-f32b-5003-96cd-7ab6de016819",
00:11:44.413 "is_configured": true,
00:11:44.413 "data_offset": 0,
00:11:44.413 "data_size": 65536
00:11:44.413 },
00:11:44.413 {
00:11:44.413 "name": "BaseBdev3",
00:11:44.413 "uuid": "72d6294f-c544-5864-94df-d99432a06f08",
00:11:44.413 "is_configured": true,
00:11:44.413 "data_offset": 0,
00:11:44.413 "data_size": 65536
00:11:44.413 },
00:11:44.413 {
00:11:44.413 "name": "BaseBdev4",
00:11:44.413 "uuid": "a643fa91-a01e-5af0-906e-a98512c9e9f3",
00:11:44.413 "is_configured": true,
00:11:44.413 "data_offset": 0,
00:11:44.413 "data_size": 65536
00:11:44.413 }
00:11:44.413 ]
00:11:44.413 }'
00:11:44.413 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:44.413 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:44.413 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:44.673 [2024-11-02 23:51:38.536961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:44.673 [2024-11-02 23:51:38.600877] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:11:44.673 [2024-11-02 23:51:38.600930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:44.673 [2024-11-02 23:51:38.600949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:44.673 [2024-11-02 23:51:38.600958] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:44.673 "name": "raid_bdev1",
00:11:44.673 "uuid": "1e42277d-7906-4837-9996-9e0aca672ba3",
00:11:44.673 "strip_size_kb": 0,
00:11:44.673 "state": "online",
00:11:44.673 "raid_level": "raid1",
00:11:44.673 "superblock": false,
00:11:44.673 "num_base_bdevs": 4,
00:11:44.673 "num_base_bdevs_discovered": 3,
00:11:44.673 "num_base_bdevs_operational": 3,
00:11:44.673 "base_bdevs_list": [
00:11:44.673 {
00:11:44.673 "name": null,
00:11:44.673 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:44.673 "is_configured": false,
00:11:44.673 "data_offset": 0,
00:11:44.673 "data_size": 65536
00:11:44.673 },
00:11:44.673 {
00:11:44.673 "name": "BaseBdev2",
00:11:44.673 "uuid": "d40b514d-f32b-5003-96cd-7ab6de016819",
00:11:44.673 "is_configured": true,
00:11:44.673 "data_offset": 0,
00:11:44.673 "data_size": 65536
00:11:44.673 },
00:11:44.673 {
00:11:44.673 "name": "BaseBdev3",
00:11:44.673 "uuid": "72d6294f-c544-5864-94df-d99432a06f08",
00:11:44.673 "is_configured": true,
00:11:44.673 "data_offset": 0,
00:11:44.673 "data_size": 65536
00:11:44.673 },
00:11:44.673 {
00:11:44.673 "name": "BaseBdev4",
00:11:44.673 "uuid": "a643fa91-a01e-5af0-906e-a98512c9e9f3",
00:11:44.673 "is_configured": true,
00:11:44.673 "data_offset": 0,
00:11:44.673 "data_size": 65536
00:11:44.673 }
00:11:44.673 ]
00:11:44.673 }'
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:44.673 23:51:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:45.242 "name": "raid_bdev1",
00:11:45.242 "uuid": "1e42277d-7906-4837-9996-9e0aca672ba3",
00:11:45.242 "strip_size_kb": 0,
00:11:45.242 "state": "online",
00:11:45.242 "raid_level": "raid1",
00:11:45.242 "superblock": false,
00:11:45.242 "num_base_bdevs": 4,
00:11:45.242 "num_base_bdevs_discovered": 3,
00:11:45.242 "num_base_bdevs_operational": 3,
00:11:45.242 "base_bdevs_list": [
00:11:45.242 {
00:11:45.242 "name": null,
00:11:45.242 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:45.242 "is_configured": false,
00:11:45.242 "data_offset": 0,
00:11:45.242 "data_size": 65536
00:11:45.242 },
00:11:45.242 {
00:11:45.242 "name": "BaseBdev2",
00:11:45.242 "uuid": "d40b514d-f32b-5003-96cd-7ab6de016819",
00:11:45.242 "is_configured": true,
00:11:45.242 "data_offset": 0,
00:11:45.242 "data_size": 65536
00:11:45.242 },
00:11:45.242 {
00:11:45.242 "name": "BaseBdev3",
00:11:45.242 "uuid": "72d6294f-c544-5864-94df-d99432a06f08",
00:11:45.242 "is_configured": true,
00:11:45.242 "data_offset": 0,
00:11:45.242 "data_size": 65536
00:11:45.242 },
00:11:45.242 {
00:11:45.242 "name": "BaseBdev4",
00:11:45.242 "uuid": "a643fa91-a01e-5af0-906e-a98512c9e9f3",
00:11:45.242 "is_configured": true,
00:11:45.242 "data_offset": 0,
00:11:45.242 "data_size": 65536
00:11:45.242 }
00:11:45.242 ]
00:11:45.242 }'
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.242 [2024-11-02 23:51:39.220461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:45.242 [2024-11-02 23:51:39.224477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06490
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.242 23:51:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1
00:11:45.242 [2024-11-02 23:51:39.226402] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:11:46.179 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:46.179 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:46.179 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:46.179 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:46.179 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:46.179 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:46.179 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:46.179 23:51:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.179 23:51:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.179 23:51:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:46.448 "name": "raid_bdev1",
00:11:46.448 "uuid": "1e42277d-7906-4837-9996-9e0aca672ba3",
00:11:46.448 "strip_size_kb": 0,
00:11:46.448 "state": "online",
00:11:46.448 "raid_level": "raid1",
00:11:46.448 "superblock": false,
00:11:46.448 "num_base_bdevs": 4,
00:11:46.448 "num_base_bdevs_discovered": 4,
00:11:46.448 "num_base_bdevs_operational": 4,
00:11:46.448 "process": {
00:11:46.448 "type": "rebuild",
00:11:46.448 "target": "spare",
00:11:46.448 "progress": {
00:11:46.448 "blocks": 20480,
00:11:46.448 "percent": 31
00:11:46.448 }
00:11:46.448 },
00:11:46.448 "base_bdevs_list": [
00:11:46.448 {
00:11:46.448 "name": "spare",
00:11:46.448 "uuid": "d14d36f1-d256-5915-8715-12d04b0e01e7",
00:11:46.448 "is_configured": true,
00:11:46.448 "data_offset": 0,
00:11:46.448 "data_size": 65536
00:11:46.448 },
00:11:46.448 {
00:11:46.448 "name": "BaseBdev2",
00:11:46.448 "uuid": "d40b514d-f32b-5003-96cd-7ab6de016819",
00:11:46.448 "is_configured": true,
00:11:46.448 "data_offset": 0,
00:11:46.448 "data_size": 65536
00:11:46.448 },
00:11:46.448 {
00:11:46.448 "name": "BaseBdev3",
00:11:46.448 "uuid": "72d6294f-c544-5864-94df-d99432a06f08",
00:11:46.448 "is_configured": true,
00:11:46.448 "data_offset": 0,
00:11:46.448 "data_size": 65536
00:11:46.448 },
00:11:46.448 {
00:11:46.448 "name": "BaseBdev4",
00:11:46.448 "uuid": "a643fa91-a01e-5af0-906e-a98512c9e9f3",
00:11:46.448 "is_configured": true,
00:11:46.448 "data_offset": 0,
00:11:46.448 "data_size": 65536
00:11:46.448 }
00:11:46.448 ]
00:11:46.448 }'
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']'
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.448 [2024-11-02 23:51:40.395234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:46.448 [2024-11-02 23:51:40.430426] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06490
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]=
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- ))
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:46.448 "name": "raid_bdev1",
00:11:46.448 "uuid": "1e42277d-7906-4837-9996-9e0aca672ba3",
00:11:46.448 "strip_size_kb": 0,
00:11:46.448 "state": "online",
00:11:46.448 "raid_level": "raid1",
00:11:46.448 "superblock": false,
00:11:46.448 "num_base_bdevs": 4,
00:11:46.448 "num_base_bdevs_discovered": 3,
00:11:46.448 "num_base_bdevs_operational": 3,
00:11:46.448 "process": {
00:11:46.448 "type": "rebuild",
00:11:46.448 "target": "spare",
00:11:46.448 "progress": {
00:11:46.448 "blocks": 24576,
00:11:46.448 "percent": 37
00:11:46.448 }
00:11:46.448 },
00:11:46.448 "base_bdevs_list": [
00:11:46.448 {
00:11:46.448 "name": "spare",
00:11:46.448 "uuid": "d14d36f1-d256-5915-8715-12d04b0e01e7",
00:11:46.448 "is_configured": true,
00:11:46.448 "data_offset": 0,
00:11:46.448 "data_size": 65536
00:11:46.448 },
00:11:46.448 {
00:11:46.448 "name": null,
00:11:46.448 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:46.448 "is_configured": false,
00:11:46.448 "data_offset": 0,
00:11:46.448 "data_size": 65536
00:11:46.448 },
00:11:46.448 {
00:11:46.448 "name": "BaseBdev3",
00:11:46.448 "uuid": "72d6294f-c544-5864-94df-d99432a06f08",
00:11:46.448 "is_configured": true,
00:11:46.448 "data_offset": 0,
00:11:46.448 "data_size": 65536
00:11:46.448 },
00:11:46.448 {
00:11:46.448 "name": "BaseBdev4",
00:11:46.448 "uuid": "a643fa91-a01e-5af0-906e-a98512c9e9f3",
00:11:46.448 "is_configured": true,
00:11:46.448 "data_offset": 0,
00:11:46.448 "data_size": 65536
00:11:46.448 }
00:11:46.448 ]
00:11:46.448 }'
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:46.448 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:46.722 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:46.722 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=359
00:11:46.722 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:11:46.722 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:46.722 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:46.722 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:46.722 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:46.722 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:46.722 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:46.722 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:46.722 23:51:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.722 23:51:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.722 23:51:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.722 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:46.722 "name": "raid_bdev1",
00:11:46.722 "uuid": "1e42277d-7906-4837-9996-9e0aca672ba3",
00:11:46.722 "strip_size_kb": 0,
00:11:46.722 "state": "online",
00:11:46.722 "raid_level": "raid1",
00:11:46.722 "superblock": false,
00:11:46.722 "num_base_bdevs": 4,
00:11:46.722 "num_base_bdevs_discovered": 3,
00:11:46.722 "num_base_bdevs_operational": 3,
00:11:46.722 "process": {
00:11:46.722 "type": "rebuild",
00:11:46.722 "target": "spare",
00:11:46.722 "progress": {
00:11:46.722 "blocks": 26624,
00:11:46.722 "percent": 40
00:11:46.722 }
00:11:46.722 },
00:11:46.722 "base_bdevs_list": [
00:11:46.722 {
00:11:46.722 "name": "spare",
00:11:46.722 "uuid": "d14d36f1-d256-5915-8715-12d04b0e01e7",
00:11:46.722 "is_configured": true,
00:11:46.722 "data_offset": 0,
00:11:46.722 "data_size": 65536
00:11:46.722 },
00:11:46.722 {
00:11:46.722 "name": null,
00:11:46.722 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:46.722 "is_configured": false,
00:11:46.722 "data_offset": 0,
00:11:46.722 "data_size": 65536
00:11:46.722 },
00:11:46.722 {
00:11:46.722 "name": "BaseBdev3",
00:11:46.722 "uuid": "72d6294f-c544-5864-94df-d99432a06f08",
00:11:46.722 "is_configured": true,
00:11:46.722 "data_offset": 0,
00:11:46.722 "data_size": 65536
00:11:46.722 },
00:11:46.722 {
00:11:46.722 "name": "BaseBdev4",
00:11:46.722 "uuid": "a643fa91-a01e-5af0-906e-a98512c9e9f3",
00:11:46.722 "is_configured": true,
00:11:46.722 "data_offset": 0,
00:11:46.722 "data_size": 65536
00:11:46.722 }
00:11:46.722 ]
00:11:46.722 }'
00:11:46.722 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:46.722 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:46.722 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:46.722 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:46.722 23:51:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:11:47.660 23:51:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:11:47.660 23:51:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:47.660 23:51:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:47.660 23:51:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:47.660 23:51:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:47.660 23:51:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:47.660 23:51:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:47.660 23:51:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.660 23:51:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:47.660 23:51:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.660 23:51:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.660 23:51:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:47.660 "name": "raid_bdev1",
00:11:47.660 "uuid": "1e42277d-7906-4837-9996-9e0aca672ba3",
00:11:47.660 "strip_size_kb": 0,
00:11:47.660 "state": "online",
00:11:47.660 "raid_level": "raid1",
00:11:47.660 "superblock": false,
00:11:47.660 "num_base_bdevs": 4,
00:11:47.660 "num_base_bdevs_discovered": 3,
00:11:47.660 "num_base_bdevs_operational": 3,
00:11:47.660 "process": {
00:11:47.660 "type": "rebuild",
00:11:47.660 "target": "spare",
00:11:47.660 "progress": {
00:11:47.660 "blocks": 49152,
00:11:47.660 "percent": 75
00:11:47.660 }
00:11:47.660 },
00:11:47.660 "base_bdevs_list": [
00:11:47.660 {
00:11:47.660 "name": "spare",
00:11:47.660 "uuid": "d14d36f1-d256-5915-8715-12d04b0e01e7",
00:11:47.660 "is_configured": true,
00:11:47.660 "data_offset": 0,
00:11:47.660 "data_size": 65536
00:11:47.660 },
00:11:47.660 {
00:11:47.660 "name": null,
00:11:47.660 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:47.660 "is_configured": false,
00:11:47.660 "data_offset": 0,
00:11:47.660 "data_size": 65536
00:11:47.660 },
00:11:47.660 {
00:11:47.660 "name": "BaseBdev3",
00:11:47.660 "uuid": "72d6294f-c544-5864-94df-d99432a06f08",
00:11:47.660 "is_configured": true,
00:11:47.660 "data_offset": 0,
00:11:47.660 "data_size": 65536
00:11:47.660 },
00:11:47.660 {
00:11:47.660 "name": "BaseBdev4",
00:11:47.660 "uuid": "a643fa91-a01e-5af0-906e-a98512c9e9f3",
00:11:47.660 "is_configured": true,
00:11:47.660 "data_offset": 0,
00:11:47.660 "data_size": 65536
00:11:47.660 }
00:11:47.660 ]
00:11:47.660 }'
00:11:47.920 23:51:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:47.920 23:51:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:47.920 23:51:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:47.920 23:51:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:47.920 23:51:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:11:48.496 [2024-11-02 23:51:42.437499] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:11:48.496 [2024-11-02 23:51:42.437619] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:11:48.496 [2024-11-02 23:51:42.437661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:48.761 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:11:48.761 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:48.761 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:48.761 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:48.761 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:48.761 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:48.761 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:48.761 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:48.761 23:51:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:48.761 23:51:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.019 23:51:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:49.019 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:49.019 "name": "raid_bdev1",
00:11:49.019 "uuid": "1e42277d-7906-4837-9996-9e0aca672ba3",
00:11:49.019 "strip_size_kb": 0,
00:11:49.019 "state": "online",
00:11:49.019 "raid_level": "raid1",
00:11:49.019 "superblock": false,
00:11:49.019 "num_base_bdevs": 4,
00:11:49.019 "num_base_bdevs_discovered": 3,
00:11:49.019 "num_base_bdevs_operational": 3,
00:11:49.019 "base_bdevs_list": [
00:11:49.019 {
00:11:49.019 "name": "spare",
00:11:49.019 "uuid": "d14d36f1-d256-5915-8715-12d04b0e01e7",
00:11:49.019 "is_configured": true,
00:11:49.019 "data_offset": 0,
00:11:49.019 "data_size": 65536
00:11:49.019 },
00:11:49.019 {
00:11:49.019 "name": null,
00:11:49.019 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:49.019 "is_configured": false,
00:11:49.019 "data_offset": 0,
00:11:49.019 "data_size": 65536
00:11:49.019 },
00:11:49.019 {
00:11:49.019 "name": "BaseBdev3",
00:11:49.019 "uuid": "72d6294f-c544-5864-94df-d99432a06f08",
00:11:49.019 "is_configured": true,
00:11:49.019 "data_offset": 0,
00:11:49.019 "data_size": 65536
00:11:49.019 },
00:11:49.019 {
00:11:49.019 "name": "BaseBdev4",
00:11:49.019 "uuid": "a643fa91-a01e-5af0-906e-a98512c9e9f3",
00:11:49.019 "is_configured": true,
00:11:49.019 "data_offset": 0,
00:11:49.019 "data_size": 65536
00:11:49.019 }
00:11:49.019 ]
00:11:49.019 }'
00:11:49.019 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:49.019 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:11:49.019 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:49.019 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:11:49.019 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break
00:11:49.019 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:11:49.019 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:49.019 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:11:49.019 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:11:49.019 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:49.019 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:49.019 23:51:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:49.019 23:51:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.019 23:51:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:49.019 23:51:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:49.019 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:49.019 "name": "raid_bdev1",
00:11:49.019 "uuid": "1e42277d-7906-4837-9996-9e0aca672ba3",
00:11:49.019 "strip_size_kb": 0,
00:11:49.019 "state": "online",
00:11:49.019 "raid_level": "raid1",
00:11:49.019 "superblock": false,
00:11:49.019 "num_base_bdevs": 4,
00:11:49.019 "num_base_bdevs_discovered": 3,
00:11:49.019 "num_base_bdevs_operational": 3,
00:11:49.020 "base_bdevs_list": [
00:11:49.020 {
00:11:49.020 "name": "spare",
00:11:49.020 "uuid": "d14d36f1-d256-5915-8715-12d04b0e01e7",
00:11:49.020 "is_configured": true,
00:11:49.020 "data_offset": 0,
00:11:49.020 "data_size": 65536
00:11:49.020 },
00:11:49.020 {
00:11:49.020 "name": null,
00:11:49.020 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:49.020 "is_configured": false,
00:11:49.020 "data_offset": 0,
00:11:49.020 "data_size": 65536
00:11:49.020 },
00:11:49.020 {
00:11:49.020 "name": "BaseBdev3",
00:11:49.020 "uuid": "72d6294f-c544-5864-94df-d99432a06f08",
00:11:49.020 "is_configured": true,
00:11:49.020 "data_offset": 0,
00:11:49.020 "data_size": 65536
00:11:49.020 },
00:11:49.020 {
00:11:49.020 "name": "BaseBdev4",
00:11:49.020 "uuid": "a643fa91-a01e-5af0-906e-a98512c9e9f3",
00:11:49.020 "is_configured": true,
00:11:49.020 "data_offset": 0,
00:11:49.020 "data_size": 65536
00:11:49.020 }
00:11:49.020 ]
00:11:49.020 }'
00:11:49.020 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:49.020 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:11:49.020 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:49.020 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:11:49.020 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:11:49.020 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:49.020 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:49.020 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:49.020 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:49.020 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:49.020 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:49.020 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:49.020 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:49.020 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:49.020 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:49.020 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:49.020 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:49.020 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.279 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:49.279 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:49.279 "name": "raid_bdev1",
00:11:49.279 "uuid": "1e42277d-7906-4837-9996-9e0aca672ba3",
00:11:49.279 "strip_size_kb": 0,
00:11:49.279 "state": "online",
00:11:49.279 "raid_level": "raid1",
00:11:49.279 "superblock": false,
00:11:49.279 "num_base_bdevs": 4,
00:11:49.279 "num_base_bdevs_discovered": 3,
00:11:49.279 "num_base_bdevs_operational": 3,
00:11:49.279 "base_bdevs_list": [
00:11:49.279 {
00:11:49.279 "name": "spare",
00:11:49.279 "uuid": "d14d36f1-d256-5915-8715-12d04b0e01e7",
00:11:49.279 "is_configured": true,
00:11:49.279 "data_offset": 0,
00:11:49.279 "data_size": 65536
00:11:49.279 },
00:11:49.279 {
00:11:49.279 "name": null,
00:11:49.279 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:49.279 "is_configured": false,
00:11:49.279 "data_offset": 0,
00:11:49.279 "data_size": 65536
00:11:49.279 },
00:11:49.279 {
00:11:49.279 "name": "BaseBdev3",
00:11:49.279 "uuid": "72d6294f-c544-5864-94df-d99432a06f08",
00:11:49.279 "is_configured": true,
00:11:49.279 "data_offset": 0,
00:11:49.279 "data_size": 65536
00:11:49.279 },
00:11:49.279 {
00:11:49.279 "name": "BaseBdev4",
00:11:49.279 "uuid": "a643fa91-a01e-5af0-906e-a98512c9e9f3",
00:11:49.279 "is_configured": true,
00:11:49.279 "data_offset": 0,
00:11:49.279 "data_size": 65536
00:11:49.279 }
00:11:49.279 ]
00:11:49.279 }'
00:11:49.279 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:49.279 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.537 [2024-11-02 23:51:43.484077] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:49.537 [2024-11-02 23:51:43.484110] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:49.537 [2024-11-02 23:51:43.484196] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:49.537 [2024-11-02 23:51:43.484274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:49.537 [2024-11-02 23:51:43.484300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:11:49.537 23:51:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
/dev/nbd0
00:11:49.813 23:51:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:11:49.813 23:51:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:11:49.813 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:11:49.813 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i
00:11:49.813 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:11:49.813 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:11:49.813 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:11:49.813 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break
00:11:49.813 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:11:49.813 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:11:49.813 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355006 s, 11.5 MB/s
00:11:49.813 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:49.813 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096
00:11:49.813 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
23:51:43
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:49.813 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:11:49.813 23:51:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:49.813 23:51:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:49.813 23:51:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:50.073 /dev/nbd1 00:11:50.073 23:51:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:50.073 23:51:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:50.073 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:11:50.073 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:11:50.073 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:50.073 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:50.073 23:51:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:11:50.073 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:11:50.073 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:50.073 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:50.073 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:50.073 1+0 records in 00:11:50.073 1+0 records out 00:11:50.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295716 s, 13.9 MB/s 00:11:50.073 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:50.073 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:11:50.073 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:50.073 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:50.073 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:11:50.073 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:50.073 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:50.074 23:51:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:50.074 23:51:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:50.074 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:50.074 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:50.074 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:50.074 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:50.074 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.074 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:50.332 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:50.332 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:50.332 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:50.332 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:11:50.332 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.332 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:50.332 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:50.332 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.332 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.332 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 87995 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 87995 ']' 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 87995 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87995 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:50.590 killing process with pid 87995 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87995' 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 87995 00:11:50.590 Received shutdown signal, test time was about 60.000000 seconds 00:11:50.590 00:11:50.590 Latency(us) 00:11:50.590 [2024-11-02T23:51:44.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.590 [2024-11-02T23:51:44.685Z] =================================================================================================================== 00:11:50.590 [2024-11-02T23:51:44.685Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:50.590 [2024-11-02 23:51:44.581023] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:50.590 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 87995 00:11:50.590 [2024-11-02 23:51:44.631688] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.850 23:51:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:50.850 00:11:50.850 real 0m14.951s 00:11:50.850 user 0m17.217s 00:11:50.850 sys 0m2.809s 00:11:50.850 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:50.850 23:51:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.850 ************************************ 00:11:50.850 END TEST raid_rebuild_test 00:11:50.850 ************************************ 00:11:50.850 
23:51:44 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:11:50.850 23:51:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:11:50.850 23:51:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:50.850 23:51:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.850 ************************************ 00:11:50.850 START TEST raid_rebuild_test_sb 00:11:50.850 ************************************ 00:11:50.850 23:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:50.851 
23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88420 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88420 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 88420 ']' 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:50.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:50.851 23:51:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.109 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:51.109 Zero copy mechanism will not be used. 00:11:51.109 [2024-11-02 23:51:45.010596] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:11:51.109 [2024-11-02 23:51:45.010735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88420 ] 00:11:51.109 [2024-11-02 23:51:45.143000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.109 [2024-11-02 23:51:45.167901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.367 [2024-11-02 23:51:45.210114] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.367 [2024-11-02 23:51:45.210157] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.935 BaseBdev1_malloc 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.935 [2024-11-02 23:51:45.863627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:11:51.935 [2024-11-02 23:51:45.863680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.935 [2024-11-02 23:51:45.863703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:51.935 [2024-11-02 23:51:45.863717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.935 [2024-11-02 23:51:45.865811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.935 [2024-11-02 23:51:45.865847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:51.935 BaseBdev1 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.935 BaseBdev2_malloc 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.935 [2024-11-02 23:51:45.892134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:51.935 [2024-11-02 23:51:45.892182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.935 [2024-11-02 23:51:45.892202] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:51.935 [2024-11-02 23:51:45.892211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.935 [2024-11-02 23:51:45.894201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.935 [2024-11-02 23:51:45.894239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:51.935 BaseBdev2 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.935 BaseBdev3_malloc 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.935 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.935 [2024-11-02 23:51:45.920507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:51.935 [2024-11-02 23:51:45.920555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.935 [2024-11-02 23:51:45.920578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:51.936 [2024-11-02 23:51:45.920587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:51.936 [2024-11-02 23:51:45.922563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.936 [2024-11-02 23:51:45.922597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:51.936 BaseBdev3 00:11:51.936 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.936 23:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:51.936 23:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:51.936 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.936 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.936 BaseBdev4_malloc 00:11:51.936 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.936 23:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:51.936 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.936 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.936 [2024-11-02 23:51:45.968062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:51.936 [2024-11-02 23:51:45.968159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.936 [2024-11-02 23:51:45.968207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:51.936 [2024-11-02 23:51:45.968228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.936 [2024-11-02 23:51:45.972417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.936 [2024-11-02 23:51:45.972467] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:51.936 BaseBdev4 00:11:51.936 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.936 23:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:51.936 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.936 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.936 spare_malloc 00:11:51.936 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.936 23:51:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:51.936 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.936 23:51:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.936 spare_delay 00:11:51.936 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.936 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:51.936 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.936 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.936 [2024-11-02 23:51:46.009734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:51.936 [2024-11-02 23:51:46.009784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.936 [2024-11-02 23:51:46.009802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:51.936 [2024-11-02 23:51:46.009810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:51.936 [2024-11-02 23:51:46.011933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.936 [2024-11-02 23:51:46.011968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:51.936 spare 00:11:51.936 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.936 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:51.936 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.936 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.936 [2024-11-02 23:51:46.021805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.936 [2024-11-02 23:51:46.023679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.936 [2024-11-02 23:51:46.023753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:51.936 [2024-11-02 23:51:46.023802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:51.936 [2024-11-02 23:51:46.023976] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:51.936 [2024-11-02 23:51:46.024003] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:51.936 [2024-11-02 23:51:46.024242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:51.936 [2024-11-02 23:51:46.024376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:51.936 [2024-11-02 23:51:46.024394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:51.936 [2024-11-02 23:51:46.024509] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.936 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.936 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:51.936 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.936 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.195 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.195 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.195 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.195 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.195 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.195 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.195 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.195 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.195 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.195 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.195 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.195 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.195 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.195 "name": "raid_bdev1", 00:11:52.195 "uuid": 
"841c9805-3b75-40f3-8f7f-1ab76671e633", 00:11:52.195 "strip_size_kb": 0, 00:11:52.195 "state": "online", 00:11:52.195 "raid_level": "raid1", 00:11:52.195 "superblock": true, 00:11:52.195 "num_base_bdevs": 4, 00:11:52.195 "num_base_bdevs_discovered": 4, 00:11:52.195 "num_base_bdevs_operational": 4, 00:11:52.195 "base_bdevs_list": [ 00:11:52.195 { 00:11:52.195 "name": "BaseBdev1", 00:11:52.195 "uuid": "591bf915-531b-544d-b945-8f58e5f760de", 00:11:52.195 "is_configured": true, 00:11:52.195 "data_offset": 2048, 00:11:52.195 "data_size": 63488 00:11:52.195 }, 00:11:52.195 { 00:11:52.195 "name": "BaseBdev2", 00:11:52.195 "uuid": "fdf975d1-b71a-5d7a-8cf9-a4df5d798773", 00:11:52.195 "is_configured": true, 00:11:52.195 "data_offset": 2048, 00:11:52.195 "data_size": 63488 00:11:52.195 }, 00:11:52.195 { 00:11:52.195 "name": "BaseBdev3", 00:11:52.195 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:11:52.195 "is_configured": true, 00:11:52.195 "data_offset": 2048, 00:11:52.195 "data_size": 63488 00:11:52.196 }, 00:11:52.196 { 00:11:52.196 "name": "BaseBdev4", 00:11:52.196 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:11:52.196 "is_configured": true, 00:11:52.196 "data_offset": 2048, 00:11:52.196 "data_size": 63488 00:11:52.196 } 00:11:52.196 ] 00:11:52.196 }' 00:11:52.196 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.196 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.456 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:52.456 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.456 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:52.456 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.457 [2024-11-02 23:51:46.437387] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:52.457 23:51:46 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:52.457 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:52.716 [2024-11-02 23:51:46.712649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:52.716 /dev/nbd0 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.716 1+0 records in 00:11:52.716 1+0 records out 00:11:52.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425172 s, 9.6 MB/s 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:52.716 23:51:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:58.008 63488+0 records in 00:11:58.008 63488+0 records out 00:11:58.008 32505856 bytes (33 MB, 31 MiB) copied, 4.94531 s, 6.6 MB/s 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:11:58.008 [2024-11-02 23:51:51.918826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.008 [2024-11-02 23:51:51.951537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.008 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:58.009 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.009 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.009 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.009 23:51:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.009 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.009 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.009 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.009 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.009 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.009 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.009 23:51:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.009 23:51:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.009 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.009 23:51:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.009 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.009 "name": "raid_bdev1", 00:11:58.009 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:11:58.009 "strip_size_kb": 0, 00:11:58.009 "state": "online", 00:11:58.009 "raid_level": "raid1", 00:11:58.009 "superblock": true, 00:11:58.009 "num_base_bdevs": 4, 00:11:58.009 "num_base_bdevs_discovered": 3, 00:11:58.009 "num_base_bdevs_operational": 3, 00:11:58.009 "base_bdevs_list": [ 00:11:58.009 { 00:11:58.009 "name": null, 00:11:58.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.009 "is_configured": false, 00:11:58.009 "data_offset": 0, 00:11:58.009 "data_size": 63488 00:11:58.009 }, 00:11:58.009 { 00:11:58.009 "name": "BaseBdev2", 00:11:58.009 "uuid": "fdf975d1-b71a-5d7a-8cf9-a4df5d798773", 00:11:58.009 "is_configured": true, 00:11:58.009 
"data_offset": 2048, 00:11:58.009 "data_size": 63488 00:11:58.009 }, 00:11:58.009 { 00:11:58.009 "name": "BaseBdev3", 00:11:58.009 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:11:58.009 "is_configured": true, 00:11:58.009 "data_offset": 2048, 00:11:58.009 "data_size": 63488 00:11:58.009 }, 00:11:58.009 { 00:11:58.009 "name": "BaseBdev4", 00:11:58.009 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:11:58.009 "is_configured": true, 00:11:58.009 "data_offset": 2048, 00:11:58.009 "data_size": 63488 00:11:58.009 } 00:11:58.009 ] 00:11:58.009 }' 00:11:58.009 23:51:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.009 23:51:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.597 23:51:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:58.597 23:51:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.597 23:51:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.597 [2024-11-02 23:51:52.430792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:58.597 [2024-11-02 23:51:52.434987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e420 00:11:58.597 23:51:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.597 23:51:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:58.597 [2024-11-02 23:51:52.436871] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:59.542 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:59.542 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:59.542 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:11:59.542 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:59.542 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:59.542 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.542 23:51:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.542 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.542 23:51:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.542 23:51:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.542 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:59.542 "name": "raid_bdev1", 00:11:59.542 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:11:59.542 "strip_size_kb": 0, 00:11:59.542 "state": "online", 00:11:59.542 "raid_level": "raid1", 00:11:59.542 "superblock": true, 00:11:59.542 "num_base_bdevs": 4, 00:11:59.542 "num_base_bdevs_discovered": 4, 00:11:59.542 "num_base_bdevs_operational": 4, 00:11:59.542 "process": { 00:11:59.542 "type": "rebuild", 00:11:59.542 "target": "spare", 00:11:59.542 "progress": { 00:11:59.542 "blocks": 20480, 00:11:59.542 "percent": 32 00:11:59.542 } 00:11:59.542 }, 00:11:59.542 "base_bdevs_list": [ 00:11:59.542 { 00:11:59.542 "name": "spare", 00:11:59.542 "uuid": "653e2670-4579-501d-91e9-5bc6893df694", 00:11:59.542 "is_configured": true, 00:11:59.542 "data_offset": 2048, 00:11:59.542 "data_size": 63488 00:11:59.542 }, 00:11:59.542 { 00:11:59.542 "name": "BaseBdev2", 00:11:59.542 "uuid": "fdf975d1-b71a-5d7a-8cf9-a4df5d798773", 00:11:59.542 "is_configured": true, 00:11:59.542 "data_offset": 2048, 00:11:59.542 "data_size": 63488 00:11:59.542 }, 00:11:59.542 { 00:11:59.542 "name": "BaseBdev3", 00:11:59.542 "uuid": 
"bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:11:59.542 "is_configured": true, 00:11:59.542 "data_offset": 2048, 00:11:59.542 "data_size": 63488 00:11:59.542 }, 00:11:59.542 { 00:11:59.542 "name": "BaseBdev4", 00:11:59.542 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:11:59.542 "is_configured": true, 00:11:59.542 "data_offset": 2048, 00:11:59.542 "data_size": 63488 00:11:59.542 } 00:11:59.542 ] 00:11:59.542 }' 00:11:59.542 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:59.542 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:59.542 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:59.542 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:59.542 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:59.542 23:51:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.542 23:51:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.542 [2024-11-02 23:51:53.589593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:59.802 [2024-11-02 23:51:53.641470] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:59.802 [2024-11-02 23:51:53.641521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.802 [2024-11-02 23:51:53.641553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:59.802 [2024-11-02 23:51:53.641569] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:59.802 23:51:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.802 23:51:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:59.802 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.802 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.802 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.802 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.802 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.802 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.802 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.802 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.802 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.802 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.802 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.802 23:51:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.802 23:51:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.802 23:51:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.802 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.802 "name": "raid_bdev1", 00:11:59.802 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:11:59.802 "strip_size_kb": 0, 00:11:59.802 "state": "online", 00:11:59.802 "raid_level": "raid1", 00:11:59.802 "superblock": true, 00:11:59.802 "num_base_bdevs": 4, 00:11:59.802 
"num_base_bdevs_discovered": 3, 00:11:59.802 "num_base_bdevs_operational": 3, 00:11:59.802 "base_bdevs_list": [ 00:11:59.802 { 00:11:59.802 "name": null, 00:11:59.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.802 "is_configured": false, 00:11:59.802 "data_offset": 0, 00:11:59.802 "data_size": 63488 00:11:59.802 }, 00:11:59.802 { 00:11:59.802 "name": "BaseBdev2", 00:11:59.802 "uuid": "fdf975d1-b71a-5d7a-8cf9-a4df5d798773", 00:11:59.802 "is_configured": true, 00:11:59.802 "data_offset": 2048, 00:11:59.802 "data_size": 63488 00:11:59.802 }, 00:11:59.802 { 00:11:59.802 "name": "BaseBdev3", 00:11:59.802 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:11:59.802 "is_configured": true, 00:11:59.802 "data_offset": 2048, 00:11:59.802 "data_size": 63488 00:11:59.802 }, 00:11:59.802 { 00:11:59.802 "name": "BaseBdev4", 00:11:59.802 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:11:59.802 "is_configured": true, 00:11:59.802 "data_offset": 2048, 00:11:59.802 "data_size": 63488 00:11:59.802 } 00:11:59.802 ] 00:11:59.802 }' 00:11:59.802 23:51:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.802 23:51:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.062 23:51:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:00.062 23:51:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.062 23:51:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:00.062 23:51:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:00.062 23:51:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.062 23:51:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.062 23:51:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:12:00.062 23:51:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.062 23:51:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.062 23:51:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.062 23:51:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.062 "name": "raid_bdev1", 00:12:00.062 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:00.062 "strip_size_kb": 0, 00:12:00.062 "state": "online", 00:12:00.062 "raid_level": "raid1", 00:12:00.062 "superblock": true, 00:12:00.062 "num_base_bdevs": 4, 00:12:00.062 "num_base_bdevs_discovered": 3, 00:12:00.062 "num_base_bdevs_operational": 3, 00:12:00.062 "base_bdevs_list": [ 00:12:00.062 { 00:12:00.062 "name": null, 00:12:00.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.062 "is_configured": false, 00:12:00.062 "data_offset": 0, 00:12:00.062 "data_size": 63488 00:12:00.062 }, 00:12:00.062 { 00:12:00.062 "name": "BaseBdev2", 00:12:00.062 "uuid": "fdf975d1-b71a-5d7a-8cf9-a4df5d798773", 00:12:00.062 "is_configured": true, 00:12:00.062 "data_offset": 2048, 00:12:00.062 "data_size": 63488 00:12:00.062 }, 00:12:00.062 { 00:12:00.062 "name": "BaseBdev3", 00:12:00.062 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:00.062 "is_configured": true, 00:12:00.062 "data_offset": 2048, 00:12:00.062 "data_size": 63488 00:12:00.062 }, 00:12:00.062 { 00:12:00.062 "name": "BaseBdev4", 00:12:00.062 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:00.062 "is_configured": true, 00:12:00.062 "data_offset": 2048, 00:12:00.062 "data_size": 63488 00:12:00.062 } 00:12:00.062 ] 00:12:00.062 }' 00:12:00.062 23:51:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.062 23:51:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:00.322 23:51:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.322 23:51:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:00.322 23:51:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:00.322 23:51:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.322 23:51:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.322 [2024-11-02 23:51:54.209100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:00.322 [2024-11-02 23:51:54.213131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e4f0 00:12:00.322 23:51:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.322 23:51:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:00.322 [2024-11-02 23:51:54.215033] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.264 "name": "raid_bdev1", 00:12:01.264 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:01.264 "strip_size_kb": 0, 00:12:01.264 "state": "online", 00:12:01.264 "raid_level": "raid1", 00:12:01.264 "superblock": true, 00:12:01.264 "num_base_bdevs": 4, 00:12:01.264 "num_base_bdevs_discovered": 4, 00:12:01.264 "num_base_bdevs_operational": 4, 00:12:01.264 "process": { 00:12:01.264 "type": "rebuild", 00:12:01.264 "target": "spare", 00:12:01.264 "progress": { 00:12:01.264 "blocks": 20480, 00:12:01.264 "percent": 32 00:12:01.264 } 00:12:01.264 }, 00:12:01.264 "base_bdevs_list": [ 00:12:01.264 { 00:12:01.264 "name": "spare", 00:12:01.264 "uuid": "653e2670-4579-501d-91e9-5bc6893df694", 00:12:01.264 "is_configured": true, 00:12:01.264 "data_offset": 2048, 00:12:01.264 "data_size": 63488 00:12:01.264 }, 00:12:01.264 { 00:12:01.264 "name": "BaseBdev2", 00:12:01.264 "uuid": "fdf975d1-b71a-5d7a-8cf9-a4df5d798773", 00:12:01.264 "is_configured": true, 00:12:01.264 "data_offset": 2048, 00:12:01.264 "data_size": 63488 00:12:01.264 }, 00:12:01.264 { 00:12:01.264 "name": "BaseBdev3", 00:12:01.264 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:01.264 "is_configured": true, 00:12:01.264 "data_offset": 2048, 00:12:01.264 "data_size": 63488 00:12:01.264 }, 00:12:01.264 { 00:12:01.264 "name": "BaseBdev4", 00:12:01.264 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:01.264 "is_configured": true, 00:12:01.264 "data_offset": 2048, 00:12:01.264 "data_size": 63488 00:12:01.264 } 00:12:01.264 ] 00:12:01.264 }' 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:01.264 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.264 23:51:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.264 [2024-11-02 23:51:55.343542] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:01.524 [2024-11-02 23:51:55.519062] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e4f0 00:12:01.524 23:51:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.524 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:01.524 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:01.524 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:12:01.524 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.524 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:01.524 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:01.524 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.524 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.524 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.524 23:51:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.524 23:51:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.524 23:51:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.524 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.524 "name": "raid_bdev1", 00:12:01.524 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:01.524 "strip_size_kb": 0, 00:12:01.524 "state": "online", 00:12:01.524 "raid_level": "raid1", 00:12:01.524 "superblock": true, 00:12:01.524 "num_base_bdevs": 4, 00:12:01.524 "num_base_bdevs_discovered": 3, 00:12:01.524 "num_base_bdevs_operational": 3, 00:12:01.524 "process": { 00:12:01.524 "type": "rebuild", 00:12:01.524 "target": "spare", 00:12:01.524 "progress": { 00:12:01.524 "blocks": 24576, 00:12:01.524 "percent": 38 00:12:01.524 } 00:12:01.524 }, 00:12:01.524 "base_bdevs_list": [ 00:12:01.524 { 00:12:01.524 "name": "spare", 00:12:01.524 "uuid": "653e2670-4579-501d-91e9-5bc6893df694", 00:12:01.524 "is_configured": true, 00:12:01.524 "data_offset": 2048, 00:12:01.524 "data_size": 63488 00:12:01.524 }, 00:12:01.524 { 00:12:01.524 "name": null, 00:12:01.524 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:01.524 "is_configured": false, 00:12:01.524 "data_offset": 0, 00:12:01.524 "data_size": 63488 00:12:01.524 }, 00:12:01.524 { 00:12:01.524 "name": "BaseBdev3", 00:12:01.524 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:01.524 "is_configured": true, 00:12:01.524 "data_offset": 2048, 00:12:01.524 "data_size": 63488 00:12:01.524 }, 00:12:01.524 { 00:12:01.524 "name": "BaseBdev4", 00:12:01.524 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:01.524 "is_configured": true, 00:12:01.524 "data_offset": 2048, 00:12:01.524 "data_size": 63488 00:12:01.524 } 00:12:01.524 ] 00:12:01.524 }' 00:12:01.524 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.524 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:01.524 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.784 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:01.784 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=374 00:12:01.784 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:01.784 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:01.784 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.784 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:01.784 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:01.784 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.784 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.784 
23:51:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.784 23:51:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.784 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.784 23:51:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.784 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.784 "name": "raid_bdev1", 00:12:01.784 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:01.784 "strip_size_kb": 0, 00:12:01.784 "state": "online", 00:12:01.784 "raid_level": "raid1", 00:12:01.784 "superblock": true, 00:12:01.784 "num_base_bdevs": 4, 00:12:01.784 "num_base_bdevs_discovered": 3, 00:12:01.784 "num_base_bdevs_operational": 3, 00:12:01.784 "process": { 00:12:01.784 "type": "rebuild", 00:12:01.784 "target": "spare", 00:12:01.784 "progress": { 00:12:01.784 "blocks": 26624, 00:12:01.784 "percent": 41 00:12:01.784 } 00:12:01.784 }, 00:12:01.784 "base_bdevs_list": [ 00:12:01.784 { 00:12:01.784 "name": "spare", 00:12:01.784 "uuid": "653e2670-4579-501d-91e9-5bc6893df694", 00:12:01.784 "is_configured": true, 00:12:01.784 "data_offset": 2048, 00:12:01.784 "data_size": 63488 00:12:01.784 }, 00:12:01.784 { 00:12:01.784 "name": null, 00:12:01.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.784 "is_configured": false, 00:12:01.784 "data_offset": 0, 00:12:01.784 "data_size": 63488 00:12:01.784 }, 00:12:01.784 { 00:12:01.784 "name": "BaseBdev3", 00:12:01.784 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:01.785 "is_configured": true, 00:12:01.785 "data_offset": 2048, 00:12:01.785 "data_size": 63488 00:12:01.785 }, 00:12:01.785 { 00:12:01.785 "name": "BaseBdev4", 00:12:01.785 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:01.785 "is_configured": true, 00:12:01.785 "data_offset": 2048, 00:12:01.785 "data_size": 63488 
00:12:01.785 } 00:12:01.785 ] 00:12:01.785 }' 00:12:01.785 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.785 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:01.785 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.785 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:01.785 23:51:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:02.723 23:51:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:02.723 23:51:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:02.723 23:51:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.723 23:51:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:02.723 23:51:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:02.723 23:51:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.723 23:51:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.723 23:51:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.723 23:51:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.723 23:51:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.982 23:51:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.982 23:51:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.982 "name": "raid_bdev1", 00:12:02.982 "uuid": 
"841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:02.982 "strip_size_kb": 0, 00:12:02.982 "state": "online", 00:12:02.983 "raid_level": "raid1", 00:12:02.983 "superblock": true, 00:12:02.983 "num_base_bdevs": 4, 00:12:02.983 "num_base_bdevs_discovered": 3, 00:12:02.983 "num_base_bdevs_operational": 3, 00:12:02.983 "process": { 00:12:02.983 "type": "rebuild", 00:12:02.983 "target": "spare", 00:12:02.983 "progress": { 00:12:02.983 "blocks": 49152, 00:12:02.983 "percent": 77 00:12:02.983 } 00:12:02.983 }, 00:12:02.983 "base_bdevs_list": [ 00:12:02.983 { 00:12:02.983 "name": "spare", 00:12:02.983 "uuid": "653e2670-4579-501d-91e9-5bc6893df694", 00:12:02.983 "is_configured": true, 00:12:02.983 "data_offset": 2048, 00:12:02.983 "data_size": 63488 00:12:02.983 }, 00:12:02.983 { 00:12:02.983 "name": null, 00:12:02.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.983 "is_configured": false, 00:12:02.983 "data_offset": 0, 00:12:02.983 "data_size": 63488 00:12:02.983 }, 00:12:02.983 { 00:12:02.983 "name": "BaseBdev3", 00:12:02.983 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:02.983 "is_configured": true, 00:12:02.983 "data_offset": 2048, 00:12:02.983 "data_size": 63488 00:12:02.983 }, 00:12:02.983 { 00:12:02.983 "name": "BaseBdev4", 00:12:02.983 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:02.983 "is_configured": true, 00:12:02.983 "data_offset": 2048, 00:12:02.983 "data_size": 63488 00:12:02.983 } 00:12:02.983 ] 00:12:02.983 }' 00:12:02.983 23:51:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.983 23:51:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:02.983 23:51:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.983 23:51:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:02.983 23:51:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:12:03.552 [2024-11-02 23:51:57.425738] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:03.552 [2024-11-02 23:51:57.425814] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:03.552 [2024-11-02 23:51:57.425956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.121 23:51:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:04.121 23:51:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:04.121 23:51:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.121 23:51:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:04.121 23:51:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:04.121 23:51:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.121 23:51:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.121 23:51:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.121 23:51:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.121 23:51:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.121 23:51:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.121 23:51:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.121 "name": "raid_bdev1", 00:12:04.121 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:04.121 "strip_size_kb": 0, 00:12:04.121 "state": "online", 00:12:04.121 "raid_level": "raid1", 00:12:04.121 "superblock": true, 00:12:04.121 "num_base_bdevs": 
4, 00:12:04.121 "num_base_bdevs_discovered": 3, 00:12:04.121 "num_base_bdevs_operational": 3, 00:12:04.121 "base_bdevs_list": [ 00:12:04.121 { 00:12:04.121 "name": "spare", 00:12:04.121 "uuid": "653e2670-4579-501d-91e9-5bc6893df694", 00:12:04.121 "is_configured": true, 00:12:04.121 "data_offset": 2048, 00:12:04.121 "data_size": 63488 00:12:04.121 }, 00:12:04.121 { 00:12:04.121 "name": null, 00:12:04.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.121 "is_configured": false, 00:12:04.121 "data_offset": 0, 00:12:04.121 "data_size": 63488 00:12:04.121 }, 00:12:04.121 { 00:12:04.121 "name": "BaseBdev3", 00:12:04.121 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:04.121 "is_configured": true, 00:12:04.121 "data_offset": 2048, 00:12:04.121 "data_size": 63488 00:12:04.121 }, 00:12:04.121 { 00:12:04.121 "name": "BaseBdev4", 00:12:04.121 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:04.121 "is_configured": true, 00:12:04.121 "data_offset": 2048, 00:12:04.121 "data_size": 63488 00:12:04.121 } 00:12:04.121 ] 00:12:04.121 }' 00:12:04.121 23:51:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.121 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:04.121 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.121 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:04.121 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:04.121 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:04.121 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.121 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:04.121 23:51:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:04.121 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.121 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.121 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.121 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.121 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.121 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.121 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.121 "name": "raid_bdev1", 00:12:04.121 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:04.121 "strip_size_kb": 0, 00:12:04.121 "state": "online", 00:12:04.121 "raid_level": "raid1", 00:12:04.121 "superblock": true, 00:12:04.121 "num_base_bdevs": 4, 00:12:04.121 "num_base_bdevs_discovered": 3, 00:12:04.121 "num_base_bdevs_operational": 3, 00:12:04.121 "base_bdevs_list": [ 00:12:04.121 { 00:12:04.121 "name": "spare", 00:12:04.121 "uuid": "653e2670-4579-501d-91e9-5bc6893df694", 00:12:04.121 "is_configured": true, 00:12:04.121 "data_offset": 2048, 00:12:04.121 "data_size": 63488 00:12:04.121 }, 00:12:04.121 { 00:12:04.121 "name": null, 00:12:04.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.121 "is_configured": false, 00:12:04.121 "data_offset": 0, 00:12:04.121 "data_size": 63488 00:12:04.121 }, 00:12:04.121 { 00:12:04.121 "name": "BaseBdev3", 00:12:04.121 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:04.121 "is_configured": true, 00:12:04.121 "data_offset": 2048, 00:12:04.121 "data_size": 63488 00:12:04.121 }, 00:12:04.121 { 00:12:04.121 "name": "BaseBdev4", 00:12:04.121 "uuid": 
"658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:04.121 "is_configured": true, 00:12:04.121 "data_offset": 2048, 00:12:04.121 "data_size": 63488 00:12:04.121 } 00:12:04.121 ] 00:12:04.121 }' 00:12:04.121 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.121 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:04.121 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.121 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:04.122 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:04.122 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.122 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.122 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.122 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.122 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.122 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.122 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.122 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.122 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.122 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.122 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.122 23:51:58 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.122 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.122 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.382 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.382 "name": "raid_bdev1", 00:12:04.382 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:04.382 "strip_size_kb": 0, 00:12:04.382 "state": "online", 00:12:04.382 "raid_level": "raid1", 00:12:04.382 "superblock": true, 00:12:04.382 "num_base_bdevs": 4, 00:12:04.382 "num_base_bdevs_discovered": 3, 00:12:04.382 "num_base_bdevs_operational": 3, 00:12:04.382 "base_bdevs_list": [ 00:12:04.382 { 00:12:04.382 "name": "spare", 00:12:04.382 "uuid": "653e2670-4579-501d-91e9-5bc6893df694", 00:12:04.382 "is_configured": true, 00:12:04.382 "data_offset": 2048, 00:12:04.382 "data_size": 63488 00:12:04.382 }, 00:12:04.382 { 00:12:04.382 "name": null, 00:12:04.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.382 "is_configured": false, 00:12:04.382 "data_offset": 0, 00:12:04.382 "data_size": 63488 00:12:04.382 }, 00:12:04.382 { 00:12:04.382 "name": "BaseBdev3", 00:12:04.382 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:04.382 "is_configured": true, 00:12:04.382 "data_offset": 2048, 00:12:04.382 "data_size": 63488 00:12:04.382 }, 00:12:04.382 { 00:12:04.382 "name": "BaseBdev4", 00:12:04.382 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:04.382 "is_configured": true, 00:12:04.382 "data_offset": 2048, 00:12:04.382 "data_size": 63488 00:12:04.382 } 00:12:04.382 ] 00:12:04.382 }' 00:12:04.382 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.382 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.642 [2024-11-02 23:51:58.616216] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:04.642 [2024-11-02 23:51:58.616247] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:04.642 [2024-11-02 23:51:58.616372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:04.642 [2024-11-02 23:51:58.616463] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:04.642 [2024-11-02 23:51:58.616482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:04.642 
23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:04.642 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:04.902 /dev/nbd0 00:12:04.902 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:04.902 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:04.902 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:04.902 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:12:04.902 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:04.902 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:04.902 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:04.902 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:12:04.902 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:04.903 23:51:58 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:04.903 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.903 1+0 records in 00:12:04.903 1+0 records out 00:12:04.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375186 s, 10.9 MB/s 00:12:04.903 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.903 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:12:04.903 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.903 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:04.903 23:51:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:12:04.903 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:04.903 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:04.903 23:51:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:05.164 /dev/nbd1 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- 
# (( i <= 20 )) 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.165 1+0 records in 00:12:05.165 1+0 records out 00:12:05.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453871 s, 9.0 MB/s 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:05.165 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:05.426 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:05.426 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:05.426 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:05.426 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:05.426 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:05.426 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:05.426 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:05.426 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:05.427 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:05.427 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:05.689 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:05.690 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:05.690 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:05.690 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:12:05.690 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:05.690 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:05.690 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:05.690 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:05.690 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:05.690 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:05.690 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.690 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.690 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.690 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:05.690 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.690 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.690 [2024-11-02 23:51:59.749524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:05.690 [2024-11-02 23:51:59.749595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.690 [2024-11-02 23:51:59.749616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:05.690 [2024-11-02 23:51:59.749628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.690 [2024-11-02 23:51:59.751826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.690 [2024-11-02 23:51:59.751864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:12:05.690 [2024-11-02 23:51:59.751942] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:05.690 [2024-11-02 23:51:59.751992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:05.690 [2024-11-02 23:51:59.752128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:05.690 [2024-11-02 23:51:59.752223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:05.690 spare 00:12:05.690 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.690 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:05.690 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.690 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.991 [2024-11-02 23:51:59.852104] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:12:05.991 [2024-11-02 23:51:59.852134] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:05.991 [2024-11-02 23:51:59.852421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:12:05.991 [2024-11-02 23:51:59.852594] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:12:05.991 [2024-11-02 23:51:59.852611] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:12:05.991 [2024-11-02 23:51:59.852753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.991 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.991 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:05.991 23:51:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.991 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.991 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.991 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.991 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.991 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.991 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.991 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.991 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.991 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.991 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.991 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.991 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.991 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.991 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.991 "name": "raid_bdev1", 00:12:05.992 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:05.992 "strip_size_kb": 0, 00:12:05.992 "state": "online", 00:12:05.992 "raid_level": "raid1", 00:12:05.992 "superblock": true, 00:12:05.992 "num_base_bdevs": 4, 00:12:05.992 "num_base_bdevs_discovered": 3, 00:12:05.992 "num_base_bdevs_operational": 3, 00:12:05.992 "base_bdevs_list": [ 00:12:05.992 { 
00:12:05.992 "name": "spare", 00:12:05.992 "uuid": "653e2670-4579-501d-91e9-5bc6893df694", 00:12:05.992 "is_configured": true, 00:12:05.992 "data_offset": 2048, 00:12:05.992 "data_size": 63488 00:12:05.992 }, 00:12:05.992 { 00:12:05.992 "name": null, 00:12:05.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.992 "is_configured": false, 00:12:05.992 "data_offset": 2048, 00:12:05.992 "data_size": 63488 00:12:05.992 }, 00:12:05.992 { 00:12:05.992 "name": "BaseBdev3", 00:12:05.992 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:05.992 "is_configured": true, 00:12:05.992 "data_offset": 2048, 00:12:05.992 "data_size": 63488 00:12:05.992 }, 00:12:05.992 { 00:12:05.992 "name": "BaseBdev4", 00:12:05.992 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:05.992 "is_configured": true, 00:12:05.992 "data_offset": 2048, 00:12:05.992 "data_size": 63488 00:12:05.992 } 00:12:05.992 ] 00:12:05.992 }' 00:12:05.992 23:51:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.992 23:51:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.289 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:06.289 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.289 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:06.289 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:06.289 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.289 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.289 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.289 23:52:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:12:06.289 23:52:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.289 23:52:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.289 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.289 "name": "raid_bdev1", 00:12:06.289 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:06.289 "strip_size_kb": 0, 00:12:06.289 "state": "online", 00:12:06.289 "raid_level": "raid1", 00:12:06.289 "superblock": true, 00:12:06.289 "num_base_bdevs": 4, 00:12:06.289 "num_base_bdevs_discovered": 3, 00:12:06.289 "num_base_bdevs_operational": 3, 00:12:06.289 "base_bdevs_list": [ 00:12:06.289 { 00:12:06.289 "name": "spare", 00:12:06.289 "uuid": "653e2670-4579-501d-91e9-5bc6893df694", 00:12:06.289 "is_configured": true, 00:12:06.289 "data_offset": 2048, 00:12:06.289 "data_size": 63488 00:12:06.289 }, 00:12:06.289 { 00:12:06.289 "name": null, 00:12:06.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.289 "is_configured": false, 00:12:06.289 "data_offset": 2048, 00:12:06.289 "data_size": 63488 00:12:06.289 }, 00:12:06.289 { 00:12:06.289 "name": "BaseBdev3", 00:12:06.289 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:06.289 "is_configured": true, 00:12:06.289 "data_offset": 2048, 00:12:06.289 "data_size": 63488 00:12:06.289 }, 00:12:06.289 { 00:12:06.289 "name": "BaseBdev4", 00:12:06.289 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:06.289 "is_configured": true, 00:12:06.289 "data_offset": 2048, 00:12:06.289 "data_size": 63488 00:12:06.289 } 00:12:06.289 ] 00:12:06.289 }' 00:12:06.289 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.289 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.548 23:52:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.548 [2024-11-02 23:52:00.484293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:06.548 23:52:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.548 "name": "raid_bdev1", 00:12:06.548 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:06.548 "strip_size_kb": 0, 00:12:06.548 "state": "online", 00:12:06.548 "raid_level": "raid1", 00:12:06.548 "superblock": true, 00:12:06.548 "num_base_bdevs": 4, 00:12:06.548 "num_base_bdevs_discovered": 2, 00:12:06.548 "num_base_bdevs_operational": 2, 00:12:06.548 "base_bdevs_list": [ 00:12:06.548 { 00:12:06.548 "name": null, 00:12:06.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.548 "is_configured": false, 00:12:06.548 "data_offset": 0, 00:12:06.548 "data_size": 63488 00:12:06.548 }, 00:12:06.548 { 00:12:06.548 "name": null, 00:12:06.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.548 "is_configured": false, 00:12:06.548 "data_offset": 2048, 00:12:06.548 "data_size": 63488 00:12:06.548 }, 00:12:06.548 { 00:12:06.548 "name": "BaseBdev3", 00:12:06.548 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:06.548 
"is_configured": true, 00:12:06.548 "data_offset": 2048, 00:12:06.548 "data_size": 63488 00:12:06.548 }, 00:12:06.548 { 00:12:06.548 "name": "BaseBdev4", 00:12:06.548 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:06.548 "is_configured": true, 00:12:06.548 "data_offset": 2048, 00:12:06.548 "data_size": 63488 00:12:06.548 } 00:12:06.548 ] 00:12:06.548 }' 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.548 23:52:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.117 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:07.117 23:52:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.117 23:52:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.117 [2024-11-02 23:52:00.907653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:07.117 [2024-11-02 23:52:00.907895] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:07.117 [2024-11-02 23:52:00.907915] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:07.117 [2024-11-02 23:52:00.907965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:07.117 [2024-11-02 23:52:00.911878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caebd0 00:12:07.117 23:52:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.117 23:52:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:07.117 [2024-11-02 23:52:00.913691] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:08.054 23:52:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:08.054 23:52:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.054 23:52:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:08.054 23:52:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:08.054 23:52:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.054 23:52:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.054 23:52:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.054 23:52:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.054 23:52:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.054 23:52:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.054 23:52:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.054 "name": "raid_bdev1", 00:12:08.054 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:08.054 "strip_size_kb": 0, 00:12:08.054 "state": "online", 00:12:08.054 "raid_level": "raid1", 
00:12:08.054 "superblock": true, 00:12:08.054 "num_base_bdevs": 4, 00:12:08.054 "num_base_bdevs_discovered": 3, 00:12:08.054 "num_base_bdevs_operational": 3, 00:12:08.054 "process": { 00:12:08.054 "type": "rebuild", 00:12:08.054 "target": "spare", 00:12:08.054 "progress": { 00:12:08.054 "blocks": 20480, 00:12:08.054 "percent": 32 00:12:08.054 } 00:12:08.054 }, 00:12:08.054 "base_bdevs_list": [ 00:12:08.054 { 00:12:08.055 "name": "spare", 00:12:08.055 "uuid": "653e2670-4579-501d-91e9-5bc6893df694", 00:12:08.055 "is_configured": true, 00:12:08.055 "data_offset": 2048, 00:12:08.055 "data_size": 63488 00:12:08.055 }, 00:12:08.055 { 00:12:08.055 "name": null, 00:12:08.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.055 "is_configured": false, 00:12:08.055 "data_offset": 2048, 00:12:08.055 "data_size": 63488 00:12:08.055 }, 00:12:08.055 { 00:12:08.055 "name": "BaseBdev3", 00:12:08.055 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:08.055 "is_configured": true, 00:12:08.055 "data_offset": 2048, 00:12:08.055 "data_size": 63488 00:12:08.055 }, 00:12:08.055 { 00:12:08.055 "name": "BaseBdev4", 00:12:08.055 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:08.055 "is_configured": true, 00:12:08.055 "data_offset": 2048, 00:12:08.055 "data_size": 63488 00:12:08.055 } 00:12:08.055 ] 00:12:08.055 }' 00:12:08.055 23:52:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.055 [2024-11-02 23:52:02.074688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:08.055 [2024-11-02 23:52:02.118178] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:08.055 [2024-11-02 23:52:02.118253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.055 [2024-11-02 23:52:02.118284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:08.055 [2024-11-02 23:52:02.118293] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.055 23:52:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.314 23:52:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.314 23:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.314 "name": "raid_bdev1", 00:12:08.314 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:08.314 "strip_size_kb": 0, 00:12:08.314 "state": "online", 00:12:08.314 "raid_level": "raid1", 00:12:08.314 "superblock": true, 00:12:08.314 "num_base_bdevs": 4, 00:12:08.314 "num_base_bdevs_discovered": 2, 00:12:08.314 "num_base_bdevs_operational": 2, 00:12:08.314 "base_bdevs_list": [ 00:12:08.314 { 00:12:08.314 "name": null, 00:12:08.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.314 "is_configured": false, 00:12:08.314 "data_offset": 0, 00:12:08.314 "data_size": 63488 00:12:08.314 }, 00:12:08.314 { 00:12:08.314 "name": null, 00:12:08.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.314 "is_configured": false, 00:12:08.314 "data_offset": 2048, 00:12:08.314 "data_size": 63488 00:12:08.314 }, 00:12:08.314 { 00:12:08.314 "name": "BaseBdev3", 00:12:08.314 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:08.314 "is_configured": true, 00:12:08.314 "data_offset": 2048, 00:12:08.314 "data_size": 63488 00:12:08.314 }, 00:12:08.314 { 00:12:08.314 "name": "BaseBdev4", 00:12:08.314 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:08.314 "is_configured": true, 00:12:08.314 "data_offset": 2048, 00:12:08.314 "data_size": 63488 00:12:08.314 } 00:12:08.314 ] 00:12:08.314 }' 00:12:08.315 23:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:08.315 23:52:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.575 23:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:08.575 23:52:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.575 23:52:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.575 [2024-11-02 23:52:02.602054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:08.575 [2024-11-02 23:52:02.602120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.575 [2024-11-02 23:52:02.602162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:12:08.575 [2024-11-02 23:52:02.602173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.575 [2024-11-02 23:52:02.602639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.575 [2024-11-02 23:52:02.602669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:08.575 [2024-11-02 23:52:02.602780] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:08.575 [2024-11-02 23:52:02.602798] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:08.575 [2024-11-02 23:52:02.602811] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:08.575 [2024-11-02 23:52:02.602834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:08.575 [2024-11-02 23:52:02.606889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0 00:12:08.575 spare 00:12:08.575 23:52:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.575 23:52:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:08.575 [2024-11-02 23:52:02.608802] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:09.956 "name": "raid_bdev1", 00:12:09.956 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:09.956 "strip_size_kb": 0, 00:12:09.956 "state": "online", 00:12:09.956 
"raid_level": "raid1", 00:12:09.956 "superblock": true, 00:12:09.956 "num_base_bdevs": 4, 00:12:09.956 "num_base_bdevs_discovered": 3, 00:12:09.956 "num_base_bdevs_operational": 3, 00:12:09.956 "process": { 00:12:09.956 "type": "rebuild", 00:12:09.956 "target": "spare", 00:12:09.956 "progress": { 00:12:09.956 "blocks": 20480, 00:12:09.956 "percent": 32 00:12:09.956 } 00:12:09.956 }, 00:12:09.956 "base_bdevs_list": [ 00:12:09.956 { 00:12:09.956 "name": "spare", 00:12:09.956 "uuid": "653e2670-4579-501d-91e9-5bc6893df694", 00:12:09.956 "is_configured": true, 00:12:09.956 "data_offset": 2048, 00:12:09.956 "data_size": 63488 00:12:09.956 }, 00:12:09.956 { 00:12:09.956 "name": null, 00:12:09.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.956 "is_configured": false, 00:12:09.956 "data_offset": 2048, 00:12:09.956 "data_size": 63488 00:12:09.956 }, 00:12:09.956 { 00:12:09.956 "name": "BaseBdev3", 00:12:09.956 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:09.956 "is_configured": true, 00:12:09.956 "data_offset": 2048, 00:12:09.956 "data_size": 63488 00:12:09.956 }, 00:12:09.956 { 00:12:09.956 "name": "BaseBdev4", 00:12:09.956 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:09.956 "is_configured": true, 00:12:09.956 "data_offset": 2048, 00:12:09.956 "data_size": 63488 00:12:09.956 } 00:12:09.956 ] 00:12:09.956 }' 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.956 [2024-11-02 23:52:03.769215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:09.956 [2024-11-02 23:52:03.813543] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:09.956 [2024-11-02 23:52:03.813610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.956 [2024-11-02 23:52:03.813629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:09.956 [2024-11-02 23:52:03.813635] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.956 
23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.956 "name": "raid_bdev1", 00:12:09.956 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:09.956 "strip_size_kb": 0, 00:12:09.956 "state": "online", 00:12:09.956 "raid_level": "raid1", 00:12:09.956 "superblock": true, 00:12:09.956 "num_base_bdevs": 4, 00:12:09.956 "num_base_bdevs_discovered": 2, 00:12:09.956 "num_base_bdevs_operational": 2, 00:12:09.956 "base_bdevs_list": [ 00:12:09.956 { 00:12:09.956 "name": null, 00:12:09.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.956 "is_configured": false, 00:12:09.956 "data_offset": 0, 00:12:09.956 "data_size": 63488 00:12:09.956 }, 00:12:09.956 { 00:12:09.956 "name": null, 00:12:09.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.956 "is_configured": false, 00:12:09.956 "data_offset": 2048, 00:12:09.956 "data_size": 63488 00:12:09.956 }, 00:12:09.956 { 00:12:09.956 "name": "BaseBdev3", 00:12:09.956 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:09.956 "is_configured": true, 00:12:09.956 "data_offset": 2048, 00:12:09.956 "data_size": 63488 00:12:09.956 }, 00:12:09.956 { 00:12:09.956 "name": "BaseBdev4", 00:12:09.956 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:09.956 "is_configured": true, 00:12:09.956 "data_offset": 2048, 00:12:09.956 "data_size": 63488 00:12:09.956 } 00:12:09.956 ] 00:12:09.956 }' 00:12:09.956 23:52:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.956 23:52:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.216 23:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:10.216 23:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.216 23:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:10.216 23:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:10.216 23:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.216 23:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.216 23:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.216 23:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.216 23:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.216 23:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.216 23:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.216 "name": "raid_bdev1", 00:12:10.216 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:10.216 "strip_size_kb": 0, 00:12:10.216 "state": "online", 00:12:10.216 "raid_level": "raid1", 00:12:10.216 "superblock": true, 00:12:10.216 "num_base_bdevs": 4, 00:12:10.216 "num_base_bdevs_discovered": 2, 00:12:10.216 "num_base_bdevs_operational": 2, 00:12:10.216 "base_bdevs_list": [ 00:12:10.216 { 00:12:10.216 "name": null, 00:12:10.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.216 "is_configured": false, 00:12:10.216 "data_offset": 0, 00:12:10.216 "data_size": 63488 00:12:10.216 }, 00:12:10.216 
{ 00:12:10.216 "name": null, 00:12:10.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.216 "is_configured": false, 00:12:10.216 "data_offset": 2048, 00:12:10.216 "data_size": 63488 00:12:10.216 }, 00:12:10.216 { 00:12:10.216 "name": "BaseBdev3", 00:12:10.216 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:10.216 "is_configured": true, 00:12:10.216 "data_offset": 2048, 00:12:10.216 "data_size": 63488 00:12:10.216 }, 00:12:10.216 { 00:12:10.216 "name": "BaseBdev4", 00:12:10.216 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:10.216 "is_configured": true, 00:12:10.216 "data_offset": 2048, 00:12:10.216 "data_size": 63488 00:12:10.216 } 00:12:10.216 ] 00:12:10.216 }' 00:12:10.216 23:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.476 23:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:10.476 23:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.476 23:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:10.476 23:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:10.476 23:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.476 23:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.476 23:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.476 23:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:10.476 23:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.476 23:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.476 [2024-11-02 23:52:04.388964] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:10.476 [2024-11-02 23:52:04.389031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.476 [2024-11-02 23:52:04.389081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:10.476 [2024-11-02 23:52:04.389090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.476 [2024-11-02 23:52:04.389473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.476 [2024-11-02 23:52:04.389496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:10.476 [2024-11-02 23:52:04.389568] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:10.476 [2024-11-02 23:52:04.389586] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:10.476 [2024-11-02 23:52:04.389599] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:10.476 [2024-11-02 23:52:04.389609] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:10.476 BaseBdev1 00:12:10.476 23:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.476 23:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:11.415 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:11.415 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.415 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.415 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.415 23:52:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.415 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:11.415 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.415 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.415 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.415 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.415 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.415 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.415 23:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.415 23:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.415 23:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.415 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.415 "name": "raid_bdev1", 00:12:11.415 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:11.415 "strip_size_kb": 0, 00:12:11.415 "state": "online", 00:12:11.415 "raid_level": "raid1", 00:12:11.415 "superblock": true, 00:12:11.415 "num_base_bdevs": 4, 00:12:11.415 "num_base_bdevs_discovered": 2, 00:12:11.415 "num_base_bdevs_operational": 2, 00:12:11.415 "base_bdevs_list": [ 00:12:11.415 { 00:12:11.415 "name": null, 00:12:11.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.415 "is_configured": false, 00:12:11.415 "data_offset": 0, 00:12:11.415 "data_size": 63488 00:12:11.415 }, 00:12:11.415 { 00:12:11.415 "name": null, 00:12:11.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.415 
"is_configured": false, 00:12:11.415 "data_offset": 2048, 00:12:11.415 "data_size": 63488 00:12:11.416 }, 00:12:11.416 { 00:12:11.416 "name": "BaseBdev3", 00:12:11.416 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:11.416 "is_configured": true, 00:12:11.416 "data_offset": 2048, 00:12:11.416 "data_size": 63488 00:12:11.416 }, 00:12:11.416 { 00:12:11.416 "name": "BaseBdev4", 00:12:11.416 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:11.416 "is_configured": true, 00:12:11.416 "data_offset": 2048, 00:12:11.416 "data_size": 63488 00:12:11.416 } 00:12:11.416 ] 00:12:11.416 }' 00:12:11.416 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.416 23:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.985 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:11.985 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.985 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:11.985 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:11.985 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.985 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.985 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.985 23:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.985 23:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.985 23:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.985 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:11.985 "name": "raid_bdev1", 00:12:11.985 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:11.985 "strip_size_kb": 0, 00:12:11.985 "state": "online", 00:12:11.985 "raid_level": "raid1", 00:12:11.985 "superblock": true, 00:12:11.985 "num_base_bdevs": 4, 00:12:11.985 "num_base_bdevs_discovered": 2, 00:12:11.985 "num_base_bdevs_operational": 2, 00:12:11.985 "base_bdevs_list": [ 00:12:11.985 { 00:12:11.985 "name": null, 00:12:11.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.985 "is_configured": false, 00:12:11.985 "data_offset": 0, 00:12:11.985 "data_size": 63488 00:12:11.985 }, 00:12:11.985 { 00:12:11.985 "name": null, 00:12:11.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.985 "is_configured": false, 00:12:11.985 "data_offset": 2048, 00:12:11.985 "data_size": 63488 00:12:11.985 }, 00:12:11.985 { 00:12:11.985 "name": "BaseBdev3", 00:12:11.985 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:11.985 "is_configured": true, 00:12:11.985 "data_offset": 2048, 00:12:11.985 "data_size": 63488 00:12:11.985 }, 00:12:11.985 { 00:12:11.985 "name": "BaseBdev4", 00:12:11.985 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:11.985 "is_configured": true, 00:12:11.985 "data_offset": 2048, 00:12:11.985 "data_size": 63488 00:12:11.985 } 00:12:11.985 ] 00:12:11.985 }' 00:12:11.985 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.985 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:11.985 23:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.985 23:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:11.985 23:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:11.985 23:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:12:11.985 23:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:11.985 23:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:11.985 23:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:11.985 23:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:11.985 23:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:11.985 23:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:11.985 23:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.985 23:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.985 [2024-11-02 23:52:06.022224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:11.985 [2024-11-02 23:52:06.022409] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:11.985 [2024-11-02 23:52:06.022423] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:11.985 request: 00:12:11.985 { 00:12:11.985 "base_bdev": "BaseBdev1", 00:12:11.985 "raid_bdev": "raid_bdev1", 00:12:11.986 "method": "bdev_raid_add_base_bdev", 00:12:11.986 "req_id": 1 00:12:11.986 } 00:12:11.986 Got JSON-RPC error response 00:12:11.986 response: 00:12:11.986 { 00:12:11.986 "code": -22, 00:12:11.986 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:11.986 } 00:12:11.986 23:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:11.986 23:52:06 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:12:11.986 23:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:11.986 23:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:11.986 23:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:11.986 23:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.366 "name": "raid_bdev1", 00:12:13.366 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:13.366 "strip_size_kb": 0, 00:12:13.366 "state": "online", 00:12:13.366 "raid_level": "raid1", 00:12:13.366 "superblock": true, 00:12:13.366 "num_base_bdevs": 4, 00:12:13.366 "num_base_bdevs_discovered": 2, 00:12:13.366 "num_base_bdevs_operational": 2, 00:12:13.366 "base_bdevs_list": [ 00:12:13.366 { 00:12:13.366 "name": null, 00:12:13.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.366 "is_configured": false, 00:12:13.366 "data_offset": 0, 00:12:13.366 "data_size": 63488 00:12:13.366 }, 00:12:13.366 { 00:12:13.366 "name": null, 00:12:13.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.366 "is_configured": false, 00:12:13.366 "data_offset": 2048, 00:12:13.366 "data_size": 63488 00:12:13.366 }, 00:12:13.366 { 00:12:13.366 "name": "BaseBdev3", 00:12:13.366 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:13.366 "is_configured": true, 00:12:13.366 "data_offset": 2048, 00:12:13.366 "data_size": 63488 00:12:13.366 }, 00:12:13.366 { 00:12:13.366 "name": "BaseBdev4", 00:12:13.366 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:13.366 "is_configured": true, 00:12:13.366 "data_offset": 2048, 00:12:13.366 "data_size": 63488 00:12:13.366 } 00:12:13.366 ] 00:12:13.366 }' 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.366 23:52:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:13.366 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.626 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.626 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.626 23:52:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.626 23:52:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.626 23:52:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.626 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.626 "name": "raid_bdev1", 00:12:13.626 "uuid": "841c9805-3b75-40f3-8f7f-1ab76671e633", 00:12:13.626 "strip_size_kb": 0, 00:12:13.626 "state": "online", 00:12:13.626 "raid_level": "raid1", 00:12:13.626 "superblock": true, 00:12:13.626 "num_base_bdevs": 4, 00:12:13.626 "num_base_bdevs_discovered": 2, 00:12:13.626 "num_base_bdevs_operational": 2, 00:12:13.626 "base_bdevs_list": [ 00:12:13.626 { 00:12:13.626 "name": null, 00:12:13.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.626 "is_configured": false, 00:12:13.626 "data_offset": 0, 00:12:13.626 "data_size": 63488 00:12:13.626 }, 00:12:13.626 { 00:12:13.626 "name": null, 00:12:13.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.626 "is_configured": false, 00:12:13.626 "data_offset": 2048, 00:12:13.626 "data_size": 63488 00:12:13.627 }, 00:12:13.627 { 00:12:13.627 "name": "BaseBdev3", 00:12:13.627 "uuid": "bf5d8e17-503e-5a43-aa63-bc9d25813b43", 00:12:13.627 "is_configured": true, 00:12:13.627 "data_offset": 2048, 00:12:13.627 "data_size": 63488 00:12:13.627 }, 
00:12:13.627 { 00:12:13.627 "name": "BaseBdev4", 00:12:13.627 "uuid": "658b5d17-faa5-5c7b-b713-f574440d457c", 00:12:13.627 "is_configured": true, 00:12:13.627 "data_offset": 2048, 00:12:13.627 "data_size": 63488 00:12:13.627 } 00:12:13.627 ] 00:12:13.627 }' 00:12:13.627 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.627 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:13.627 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.627 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:13.627 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88420 00:12:13.627 23:52:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 88420 ']' 00:12:13.627 23:52:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 88420 00:12:13.627 23:52:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:12:13.627 23:52:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:13.627 23:52:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88420 00:12:13.627 23:52:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:13.627 23:52:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:13.627 23:52:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88420' 00:12:13.627 killing process with pid 88420 00:12:13.627 23:52:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 88420 00:12:13.627 Received shutdown signal, test time was about 60.000000 seconds 00:12:13.627 00:12:13.627 Latency(us) 00:12:13.627 
[2024-11-02T23:52:07.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:13.627 [2024-11-02T23:52:07.722Z] =================================================================================================================== 00:12:13.627 [2024-11-02T23:52:07.722Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:13.627 [2024-11-02 23:52:07.612407] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:13.627 [2024-11-02 23:52:07.612529] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:13.627 [2024-11-02 23:52:07.612612] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:13.627 [2024-11-02 23:52:07.612633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:12:13.627 23:52:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 88420 00:12:13.627 [2024-11-02 23:52:07.663234] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:13.886 00:12:13.886 real 0m22.952s 00:12:13.886 user 0m28.135s 00:12:13.886 sys 0m3.665s 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.886 ************************************ 00:12:13.886 END TEST raid_rebuild_test_sb 00:12:13.886 ************************************ 00:12:13.886 23:52:07 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:12:13.886 23:52:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:13.886 23:52:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:13.886 23:52:07 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:12:13.886 ************************************ 00:12:13.886 START TEST raid_rebuild_test_io 00:12:13.886 ************************************ 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89150 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89150 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 89150 ']' 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:12:13.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:13.886 23:52:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.145 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:14.145 Zero copy mechanism will not be used. 00:12:14.145 [2024-11-02 23:52:08.053656] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:12:14.145 [2024-11-02 23:52:08.053813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89150 ] 00:12:14.145 [2024-11-02 23:52:08.208901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.145 [2024-11-02 23:52:08.234560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.405 [2024-11-02 23:52:08.277002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.405 [2024-11-02 23:52:08.277046] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.974 BaseBdev1_malloc 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.974 [2024-11-02 23:52:08.910743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:14.974 [2024-11-02 23:52:08.910808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.974 [2024-11-02 23:52:08.910832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:14.974 [2024-11-02 23:52:08.910846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.974 [2024-11-02 23:52:08.912910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.974 [2024-11-02 23:52:08.912945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:14.974 BaseBdev1 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:12:14.974 BaseBdev2_malloc 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.974 [2024-11-02 23:52:08.939306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:14.974 [2024-11-02 23:52:08.939353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.974 [2024-11-02 23:52:08.939372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:14.974 [2024-11-02 23:52:08.939380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.974 [2024-11-02 23:52:08.941395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.974 [2024-11-02 23:52:08.941435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:14.974 BaseBdev2 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.974 BaseBdev3_malloc 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.974 23:52:08 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:14.975 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.975 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.975 [2024-11-02 23:52:08.968010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:14.975 [2024-11-02 23:52:08.968071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.975 [2024-11-02 23:52:08.968095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:14.975 [2024-11-02 23:52:08.968103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.975 [2024-11-02 23:52:08.970104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.975 [2024-11-02 23:52:08.970137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:14.975 BaseBdev3 00:12:14.975 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.975 23:52:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:14.975 23:52:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:14.975 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.975 23:52:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.975 BaseBdev4_malloc 00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.975 [2024-11-02 23:52:09.012065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:14.975 [2024-11-02 23:52:09.012139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.975 [2024-11-02 23:52:09.012174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:14.975 [2024-11-02 23:52:09.012191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.975 [2024-11-02 23:52:09.015693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.975 [2024-11-02 23:52:09.015766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:14.975 BaseBdev4 00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.975 spare_malloc 00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.975 spare_delay 00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.975 [2024-11-02 23:52:09.052939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:14.975 [2024-11-02 23:52:09.052982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.975 [2024-11-02 23:52:09.052999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:14.975 [2024-11-02 23:52:09.053007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.975 [2024-11-02 23:52:09.055043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.975 [2024-11-02 23:52:09.055076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:14.975 spare 00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.975 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.975 [2024-11-02 23:52:09.065006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:14.975 [2024-11-02 23:52:09.066833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:14.975 [2024-11-02 23:52:09.066897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:14.975 [2024-11-02 23:52:09.066945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed
00:12:14.975 [2024-11-02 23:52:09.067028] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200
00:12:14.975 [2024-11-02 23:52:09.067046] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:12:15.235 [2024-11-02 23:52:09.067302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:12:15.235 [2024-11-02 23:52:09.067434] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200
00:12:15.235 [2024-11-02 23:52:09.067461] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200
00:12:15.235 [2024-11-02 23:52:09.067594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:15.235 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.235 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:12:15.235 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:15.235 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:15.235 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:15.235 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:15.235 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:15.235 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:15.235 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:15.235 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:15.235 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:15.235 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:15.235 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:15.235 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.235 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:15.235 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.235 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:15.235 "name": "raid_bdev1",
00:12:15.235 "uuid": "9e2acf0c-670f-41e4-ad45-53232e08940f",
00:12:15.235 "strip_size_kb": 0,
00:12:15.235 "state": "online",
00:12:15.235 "raid_level": "raid1",
00:12:15.235 "superblock": false,
00:12:15.235 "num_base_bdevs": 4,
00:12:15.235 "num_base_bdevs_discovered": 4,
00:12:15.235 "num_base_bdevs_operational": 4,
00:12:15.235 "base_bdevs_list": [
00:12:15.235 {
00:12:15.235 "name": "BaseBdev1",
00:12:15.235 "uuid": "1e59a6c4-7942-5c6a-95b2-c9c7d96141fb",
00:12:15.235 "is_configured": true,
00:12:15.235 "data_offset": 0,
00:12:15.235 "data_size": 65536
00:12:15.235 },
00:12:15.235 {
00:12:15.235 "name": "BaseBdev2",
00:12:15.235 "uuid": "9588f146-527a-57e6-adcf-adbaa32e29e1",
00:12:15.235 "is_configured": true,
00:12:15.235 "data_offset": 0,
00:12:15.235 "data_size": 65536
00:12:15.235 },
00:12:15.235 {
00:12:15.235 "name": "BaseBdev3",
00:12:15.235 "uuid": "e5a5cd2a-9e96-5c9e-872e-37e0be95a917",
00:12:15.235 "is_configured": true,
00:12:15.235 "data_offset": 0,
00:12:15.235 "data_size": 65536
00:12:15.235 },
00:12:15.235 {
00:12:15.235 "name": "BaseBdev4",
00:12:15.235 "uuid": "74890a8c-528a-59b0-a877-cd404a759876",
00:12:15.235 "is_configured": true,
00:12:15.235 "data_offset": 0,
00:12:15.235 "data_size": 65536
00:12:15.235 }
00:12:15.235 ]
00:12:15.235 }'
00:12:15.235 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:15.235 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:15.495 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:15.495 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:12:15.495 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.495 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:15.495 [2024-11-02 23:52:09.520523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:15.495 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.495 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:12:15.495 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:15.495 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.495 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:15.495 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:12:15.495 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:15.754 [2024-11-02 23:52:09.616042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:15.754 "name": "raid_bdev1",
00:12:15.754 "uuid": "9e2acf0c-670f-41e4-ad45-53232e08940f",
00:12:15.754 "strip_size_kb": 0,
00:12:15.754 "state": "online",
00:12:15.754 "raid_level": "raid1",
00:12:15.754 "superblock": false,
00:12:15.754 "num_base_bdevs": 4,
00:12:15.754 "num_base_bdevs_discovered": 3,
00:12:15.754 "num_base_bdevs_operational": 3,
00:12:15.754 "base_bdevs_list": [
00:12:15.754 {
00:12:15.754 "name": null,
00:12:15.754 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:15.754 "is_configured": false,
00:12:15.754 "data_offset": 0,
00:12:15.754 "data_size": 65536
00:12:15.754 },
00:12:15.754 {
00:12:15.754 "name": "BaseBdev2",
00:12:15.754 "uuid": "9588f146-527a-57e6-adcf-adbaa32e29e1",
00:12:15.754 "is_configured": true,
00:12:15.754 "data_offset": 0,
00:12:15.754 "data_size": 65536
00:12:15.754 },
00:12:15.754 {
00:12:15.754 "name": "BaseBdev3",
00:12:15.754 "uuid": "e5a5cd2a-9e96-5c9e-872e-37e0be95a917",
00:12:15.754 "is_configured": true,
00:12:15.754 "data_offset": 0,
00:12:15.754 "data_size": 65536
00:12:15.754 },
00:12:15.754 {
00:12:15.754 "name": "BaseBdev4",
00:12:15.754 "uuid": "74890a8c-528a-59b0-a877-cd404a759876",
00:12:15.754 "is_configured": true,
00:12:15.754 "data_offset": 0,
00:12:15.754 "data_size": 65536
00:12:15.754 }
00:12:15.754 ]
00:12:15.754 }'
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:15.754 23:52:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:15.754 [2024-11-02 23:52:09.685857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870
00:12:15.754 I/O size of 3145728 is greater than zero copy threshold (65536).
00:12:15.754 Zero copy mechanism will not be used.
00:12:15.754 Running I/O for 60 seconds...
00:12:16.033 23:52:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:16.033 23:52:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.033 23:52:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:16.033 [2024-11-02 23:52:10.052986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:12:16.033 23:52:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.033 23:52:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:12:16.033 [2024-11-02 23:52:10.112775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940
00:12:16.033 [2024-11-02 23:52:10.114829] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:12:16.292 [2024-11-02 23:52:10.252084] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:12:16.551 [2024-11-02 23:52:10.471712] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:12:16.551 [2024-11-02 23:52:10.472426] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:12:17.110 172.00 IOPS, 516.00 MiB/s [2024-11-02T23:52:11.205Z] [2024-11-02 23:52:10.951921] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:12:17.110 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:17.110 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:17.110 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:17.110 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:17.110 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:17.110 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:17.110 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:17.110 23:52:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:17.110 23:52:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:17.110 23:52:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:17.110 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:17.110 "name": "raid_bdev1",
00:12:17.110 "uuid": "9e2acf0c-670f-41e4-ad45-53232e08940f",
00:12:17.110 "strip_size_kb": 0,
00:12:17.110 "state": "online",
00:12:17.110 "raid_level": "raid1",
00:12:17.110 "superblock": false,
00:12:17.110 "num_base_bdevs": 4,
00:12:17.110 "num_base_bdevs_discovered": 4,
00:12:17.110 "num_base_bdevs_operational": 4,
00:12:17.110 "process": {
00:12:17.110 "type": "rebuild",
00:12:17.110 "target": "spare",
00:12:17.110 "progress": {
00:12:17.110 "blocks": 10240,
00:12:17.110 "percent": 15
00:12:17.110 }
00:12:17.110 },
00:12:17.110 "base_bdevs_list": [
00:12:17.110 {
00:12:17.110 "name": "spare",
00:12:17.110 "uuid": "004e514b-9e0b-50f3-9a17-8659d473dd8b",
00:12:17.110 "is_configured": true,
00:12:17.110 "data_offset": 0,
00:12:17.110 "data_size": 65536
00:12:17.110 },
00:12:17.110 {
00:12:17.110 "name": "BaseBdev2",
00:12:17.110 "uuid": "9588f146-527a-57e6-adcf-adbaa32e29e1",
00:12:17.110 "is_configured": true,
00:12:17.110 "data_offset": 0,
00:12:17.110 "data_size": 65536
00:12:17.110 },
00:12:17.110 {
00:12:17.110 "name": "BaseBdev3",
00:12:17.110 "uuid": "e5a5cd2a-9e96-5c9e-872e-37e0be95a917",
00:12:17.110 "is_configured": true,
00:12:17.110 "data_offset": 0,
00:12:17.110 "data_size": 65536
00:12:17.110 },
00:12:17.110 {
00:12:17.110 "name": "BaseBdev4",
00:12:17.110 "uuid": "74890a8c-528a-59b0-a877-cd404a759876",
00:12:17.110 "is_configured": true,
00:12:17.110 "data_offset": 0,
00:12:17.110 "data_size": 65536
00:12:17.110 }
00:12:17.110 ]
00:12:17.110 }'
00:12:17.110 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:17.391 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:17.391 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:17.391 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:17.391 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:12:17.391 23:52:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:17.391 23:52:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:17.391 [2024-11-02 23:52:11.247280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:12:17.391 [2024-11-02 23:52:11.303613] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:12:17.391 [2024-11-02 23:52:11.410910] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:12:17.391 [2024-11-02 23:52:11.427721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:17.391 [2024-11-02 23:52:11.427815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:12:17.391 [2024-11-02 23:52:11.427844] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:12:17.391 [2024-11-02 23:52:11.451027] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870
00:12:17.391 23:52:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:17.391 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:12:17.391 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:17.391 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:17.391 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:17.391 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:17.391 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:17.391 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:17.391 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:17.392 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:17.392 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:17.392 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:17.392 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:17.392 23:52:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:17.392 23:52:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:17.652 23:52:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:17.652 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:17.652 "name": "raid_bdev1",
00:12:17.652 "uuid": "9e2acf0c-670f-41e4-ad45-53232e08940f",
00:12:17.652 "strip_size_kb": 0,
00:12:17.652 "state": "online",
00:12:17.652 "raid_level": "raid1",
00:12:17.652 "superblock": false,
00:12:17.652 "num_base_bdevs": 4,
00:12:17.652 "num_base_bdevs_discovered": 3,
00:12:17.652 "num_base_bdevs_operational": 3,
00:12:17.652 "base_bdevs_list": [
00:12:17.652 {
00:12:17.652 "name": null,
00:12:17.652 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:17.652 "is_configured": false,
00:12:17.652 "data_offset": 0,
00:12:17.652 "data_size": 65536
00:12:17.652 },
00:12:17.652 {
00:12:17.652 "name": "BaseBdev2",
00:12:17.652 "uuid": "9588f146-527a-57e6-adcf-adbaa32e29e1",
00:12:17.652 "is_configured": true,
00:12:17.652 "data_offset": 0,
00:12:17.652 "data_size": 65536
00:12:17.652 },
00:12:17.652 {
00:12:17.652 "name": "BaseBdev3",
00:12:17.652 "uuid": "e5a5cd2a-9e96-5c9e-872e-37e0be95a917",
00:12:17.652 "is_configured": true,
00:12:17.652 "data_offset": 0,
00:12:17.652 "data_size": 65536
00:12:17.652 },
00:12:17.652 {
00:12:17.652 "name": "BaseBdev4",
00:12:17.652 "uuid": "74890a8c-528a-59b0-a877-cd404a759876",
00:12:17.652 "is_configured": true,
00:12:17.652 "data_offset": 0,
00:12:17.652 "data_size": 65536
00:12:17.652 }
00:12:17.652 ]
00:12:17.652 }'
00:12:17.652 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:17.652 23:52:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:17.911 143.00 IOPS, 429.00 MiB/s [2024-11-02T23:52:12.006Z] 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:12:17.911 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:17.911 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:12:17.911 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:12:17.911 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:17.911 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:17.911 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:17.911 23:52:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:17.911 23:52:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:17.911 23:52:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:17.911 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:17.911 "name": "raid_bdev1",
00:12:17.911 "uuid": "9e2acf0c-670f-41e4-ad45-53232e08940f",
00:12:17.911 "strip_size_kb": 0,
00:12:17.911 "state": "online",
00:12:17.911 "raid_level": "raid1",
00:12:17.911 "superblock": false,
00:12:17.911 "num_base_bdevs": 4,
00:12:17.911 "num_base_bdevs_discovered": 3,
00:12:17.911 "num_base_bdevs_operational": 3,
00:12:17.911 "base_bdevs_list": [
00:12:17.911 {
00:12:17.911 "name": null,
00:12:17.911 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:17.911 "is_configured": false,
00:12:17.911 "data_offset": 0,
00:12:17.911 "data_size": 65536
00:12:17.911 },
00:12:17.911 {
00:12:17.911 "name": "BaseBdev2",
00:12:17.911 "uuid": "9588f146-527a-57e6-adcf-adbaa32e29e1",
00:12:17.911 "is_configured": true,
00:12:17.911 "data_offset": 0,
00:12:17.911 "data_size": 65536
00:12:17.911 },
00:12:17.911 {
00:12:17.911 "name": "BaseBdev3",
00:12:17.911 "uuid": "e5a5cd2a-9e96-5c9e-872e-37e0be95a917",
00:12:17.911 "is_configured": true,
00:12:17.911 "data_offset": 0,
00:12:17.911 "data_size": 65536
00:12:17.911 },
00:12:17.911 {
00:12:17.911 "name": "BaseBdev4",
00:12:17.911 "uuid": "74890a8c-528a-59b0-a877-cd404a759876",
00:12:17.911 "is_configured": true,
00:12:17.911 "data_offset": 0,
00:12:17.911 "data_size": 65536
00:12:17.911 }
00:12:17.911 ]
00:12:17.911 }'
00:12:17.911 23:52:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:18.170 23:52:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:12:18.170 23:52:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:18.170 23:52:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:12:18.170 23:52:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:18.170 23:52:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:18.170 23:52:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:18.170 [2024-11-02 23:52:12.069200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:12:18.170 23:52:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:18.170 23:52:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:12:18.170 [2024-11-02 23:52:12.112187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10
00:12:18.170 [2024-11-02 23:52:12.114112] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:12:18.171 [2024-11-02 23:52:12.230129] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:12:18.171 [2024-11-02 23:52:12.231284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:12:18.429 [2024-11-02 23:52:12.451754] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:12:18.429 [2024-11-02 23:52:12.452406] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:12:18.948 144.00 IOPS, 432.00 MiB/s [2024-11-02T23:52:13.043Z] [2024-11-02 23:52:12.788052] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:12:18.948 [2024-11-02 23:52:12.902088] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:12:18.948 [2024-11-02 23:52:12.902783] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:19.208 "name": "raid_bdev1",
00:12:19.208 "uuid": "9e2acf0c-670f-41e4-ad45-53232e08940f",
00:12:19.208 "strip_size_kb": 0,
00:12:19.208 "state": "online",
00:12:19.208 "raid_level": "raid1",
00:12:19.208 "superblock": false,
00:12:19.208 "num_base_bdevs": 4,
00:12:19.208 "num_base_bdevs_discovered": 4,
00:12:19.208 "num_base_bdevs_operational": 4,
00:12:19.208 "process": {
00:12:19.208 "type": "rebuild",
00:12:19.208 "target": "spare",
00:12:19.208 "progress": {
00:12:19.208 "blocks": 10240,
00:12:19.208 "percent": 15
00:12:19.208 }
00:12:19.208 },
00:12:19.208 "base_bdevs_list": [
00:12:19.208 {
00:12:19.208 "name": "spare",
00:12:19.208 "uuid": "004e514b-9e0b-50f3-9a17-8659d473dd8b",
00:12:19.208 "is_configured": true,
00:12:19.208 "data_offset": 0,
00:12:19.208 "data_size": 65536
00:12:19.208 },
00:12:19.208 {
00:12:19.208 "name": "BaseBdev2",
00:12:19.208 "uuid": "9588f146-527a-57e6-adcf-adbaa32e29e1",
00:12:19.208 "is_configured": true,
00:12:19.208 "data_offset": 0,
00:12:19.208 "data_size": 65536
00:12:19.208 },
00:12:19.208 {
00:12:19.208 "name": "BaseBdev3",
00:12:19.208 "uuid": "e5a5cd2a-9e96-5c9e-872e-37e0be95a917",
00:12:19.208 "is_configured": true,
00:12:19.208 "data_offset": 0,
00:12:19.208 "data_size": 65536
00:12:19.208 },
00:12:19.208 {
00:12:19.208 "name": "BaseBdev4",
00:12:19.208 "uuid": "74890a8c-528a-59b0-a877-cd404a759876",
00:12:19.208 "is_configured": true,
00:12:19.208 "data_offset": 0,
00:12:19.208 "data_size": 65536
00:12:19.208 }
00:12:19.208 ]
00:12:19.208 }'
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:19.208 [2024-11-02 23:52:13.241604] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']'
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:19.208 23:52:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:19.208 [2024-11-02 23:52:13.269254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:12:19.468 [2024-11-02 23:52:13.357661] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:12:19.468 [2024-11-02 23:52:13.459917] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870
00:12:19.468 [2024-11-02 23:52:13.460020] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10
00:12:19.468 [2024-11-02 23:52:13.461848] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:12:19.468 23:52:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:19.468 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]=
00:12:19.468 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- ))
00:12:19.468 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:19.468 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:19.468 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:19.468 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:19.468 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:19.468 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:19.468 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:19.468 23:52:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:19.468 23:52:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:19.469 23:52:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:19.469 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:19.469 "name": "raid_bdev1",
00:12:19.469 "uuid": "9e2acf0c-670f-41e4-ad45-53232e08940f",
00:12:19.469 "strip_size_kb": 0,
00:12:19.469 "state": "online",
00:12:19.469 "raid_level": "raid1",
00:12:19.469 "superblock": false,
00:12:19.469 "num_base_bdevs": 4,
00:12:19.469 "num_base_bdevs_discovered": 3,
00:12:19.469 "num_base_bdevs_operational": 3,
00:12:19.469 "process": {
00:12:19.469 "type": "rebuild",
00:12:19.469 "target": "spare",
00:12:19.469 "progress": {
00:12:19.469 "blocks": 16384,
00:12:19.469 "percent": 25
00:12:19.469 }
00:12:19.469 },
00:12:19.469 "base_bdevs_list": [
00:12:19.469 {
00:12:19.469 "name": "spare",
00:12:19.469 "uuid": "004e514b-9e0b-50f3-9a17-8659d473dd8b",
00:12:19.469 "is_configured": true,
00:12:19.469 "data_offset": 0,
00:12:19.469 "data_size": 65536
00:12:19.469 },
00:12:19.469 {
00:12:19.469 "name": null,
00:12:19.469 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:19.469 "is_configured": false,
00:12:19.469 "data_offset": 0,
00:12:19.469 "data_size": 65536
00:12:19.469 },
00:12:19.469 {
00:12:19.469 "name": "BaseBdev3",
00:12:19.469 "uuid": "e5a5cd2a-9e96-5c9e-872e-37e0be95a917",
00:12:19.469 "is_configured": true,
00:12:19.469 "data_offset": 0,
00:12:19.469 "data_size": 65536
00:12:19.469 },
00:12:19.469 {
00:12:19.469 "name": "BaseBdev4",
00:12:19.469 "uuid": "74890a8c-528a-59b0-a877-cd404a759876",
00:12:19.469 "is_configured": true,
00:12:19.469 "data_offset": 0,
00:12:19.469 "data_size": 65536
00:12:19.469 }
00:12:19.469 ]
00:12:19.469 }'
00:12:19.469 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:19.469 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:19.469 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:19.728 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:19.728 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=392
00:12:19.728 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:19.728 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:19.728 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:19.728 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:19.728 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:19.728 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:19.728 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:19.728 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:19.728 23:52:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:19.728 23:52:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:19.728 23:52:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:19.728 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:19.728 "name": "raid_bdev1",
00:12:19.728 "uuid": "9e2acf0c-670f-41e4-ad45-53232e08940f",
00:12:19.728 "strip_size_kb": 0,
00:12:19.728 "state": "online",
00:12:19.728 "raid_level": "raid1",
00:12:19.728 "superblock": false,
00:12:19.728 "num_base_bdevs": 4,
00:12:19.728 "num_base_bdevs_discovered": 3,
00:12:19.728 "num_base_bdevs_operational": 3,
00:12:19.728 "process": {
00:12:19.728 "type": "rebuild",
00:12:19.728 "target": "spare",
00:12:19.728 "progress": {
00:12:19.728 "blocks": 18432,
00:12:19.728 "percent": 28
00:12:19.728 }
00:12:19.728 },
00:12:19.728 "base_bdevs_list": [
00:12:19.728 {
00:12:19.728 "name": "spare",
00:12:19.728 "uuid": "004e514b-9e0b-50f3-9a17-8659d473dd8b",
00:12:19.728 "is_configured": true,
00:12:19.728 "data_offset": 0,
00:12:19.728 "data_size": 65536
00:12:19.728 },
00:12:19.728 {
00:12:19.728 "name": null,
00:12:19.728 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:19.728 "is_configured": false,
00:12:19.728 "data_offset": 0,
00:12:19.728 "data_size": 65536
00:12:19.728 },
00:12:19.728 {
00:12:19.728 "name": "BaseBdev3",
00:12:19.728 "uuid": "e5a5cd2a-9e96-5c9e-872e-37e0be95a917",
00:12:19.728 "is_configured": true,
00:12:19.728 "data_offset": 0,
00:12:19.728 "data_size": 65536
00:12:19.728 },
00:12:19.728 {
00:12:19.728 "name": "BaseBdev4",
00:12:19.728 "uuid": "74890a8c-528a-59b0-a877-cd404a759876",
00:12:19.728 "is_configured": true,
00:12:19.728 "data_offset": 0,
00:12:19.728 "data_size": 65536
00:12:19.728 }
00:12:19.728 ]
00:12:19.728 }'
00:12:19.728 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:19.728 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:19.728 126.00 IOPS, 378.00 MiB/s [2024-11-02T23:52:13.823Z] [2024-11-02 23:52:13.701325] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:19.728 23:52:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:12:20.666 113.20 IOPS, 339.60 MiB/s [2024-11-02T23:52:14.761Z] 23:52:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:20.666 23:52:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:20.666 23:52:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:20.666 23:52:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:20.666 23:52:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:20.666 23:52:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:20.666 23:52:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:20.666 23:52:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:20.666 23:52:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:20.666 23:52:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:20.666 [2024-11-02 23:52:14.743079] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008
00:12:20.940 23:52:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:20.940 23:52:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:20.940 "name": "raid_bdev1",
00:12:20.940 "uuid": "9e2acf0c-670f-41e4-ad45-53232e08940f",
00:12:20.940 "strip_size_kb": 0,
00:12:20.940 "state": "online",
00:12:20.940 "raid_level": "raid1",
00:12:20.940 "superblock": false,
00:12:20.940 "num_base_bdevs": 4,
00:12:20.940 "num_base_bdevs_discovered": 3,
00:12:20.940 "num_base_bdevs_operational": 3,
00:12:20.940 "process": {
00:12:20.940 "type": "rebuild",
00:12:20.940 "target": "spare",
00:12:20.940 "progress": {
00:12:20.940 "blocks": 40960,
00:12:20.940 "percent": 62
00:12:20.940 }
00:12:20.940 },
00:12:20.940 "base_bdevs_list": [
00:12:20.940 {
00:12:20.940 "name": "spare",
00:12:20.940 "uuid": "004e514b-9e0b-50f3-9a17-8659d473dd8b",
00:12:20.940 "is_configured": true,
00:12:20.940 "data_offset": 0,
00:12:20.940 "data_size": 65536
00:12:20.940 },
00:12:20.940 {
00:12:20.940 "name": null,
00:12:20.940 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:20.940 "is_configured": false,
00:12:20.940 "data_offset": 0,
00:12:20.940 "data_size": 65536
00:12:20.940 },
00:12:20.940 {
00:12:20.940 "name": "BaseBdev3",
00:12:20.940 "uuid": "e5a5cd2a-9e96-5c9e-872e-37e0be95a917",
00:12:20.940 "is_configured": true,
00:12:20.940 "data_offset": 0,
00:12:20.940 "data_size": 65536
00:12:20.940 },
00:12:20.940 {
00:12:20.940 "name": "BaseBdev4",
00:12:20.940 "uuid": "74890a8c-528a-59b0-a877-cd404a759876",
00:12:20.940 "is_configured": true,
00:12:20.940 "data_offset": 0,
00:12:20.940 "data_size": 65536
00:12:20.940 }
00:12:20.940 ]
00:12:20.940 }'
00:12:20.940 23:52:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:20.940 23:52:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:20.940 23:52:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:20.940 23:52:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:20.940 23:52:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:12:21.885 100.67 IOPS, 302.00 MiB/s [2024-11-02T23:52:15.980Z] [2024-11-02 23:52:15.737387] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440
00:12:21.885 [2024-11-02 23:52:15.737824] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440
00:12:21.885 [2024-11-02 23:52:15.859961] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440
00:12:21.885 23:52:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:21.885 23:52:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:21.885 23:52:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:21.885 23:52:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:21.885 23:52:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:21.885 23:52:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:21.885 23:52:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:21.885 23:52:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:21.885 23:52:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:21.885 23:52:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:21.885 23:52:15 bdev_raid.raid_rebuild_test_io --
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.885 23:52:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.885 "name": "raid_bdev1", 00:12:21.885 "uuid": "9e2acf0c-670f-41e4-ad45-53232e08940f", 00:12:21.885 "strip_size_kb": 0, 00:12:21.885 "state": "online", 00:12:21.885 "raid_level": "raid1", 00:12:21.885 "superblock": false, 00:12:21.885 "num_base_bdevs": 4, 00:12:21.885 "num_base_bdevs_discovered": 3, 00:12:21.885 "num_base_bdevs_operational": 3, 00:12:21.885 "process": { 00:12:21.885 "type": "rebuild", 00:12:21.885 "target": "spare", 00:12:21.885 "progress": { 00:12:21.885 "blocks": 59392, 00:12:21.885 "percent": 90 00:12:21.885 } 00:12:21.885 }, 00:12:21.885 "base_bdevs_list": [ 00:12:21.885 { 00:12:21.885 "name": "spare", 00:12:21.885 "uuid": "004e514b-9e0b-50f3-9a17-8659d473dd8b", 00:12:21.885 "is_configured": true, 00:12:21.885 "data_offset": 0, 00:12:21.885 "data_size": 65536 00:12:21.885 }, 00:12:21.885 { 00:12:21.885 "name": null, 00:12:21.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.885 "is_configured": false, 00:12:21.885 "data_offset": 0, 00:12:21.885 "data_size": 65536 00:12:21.885 }, 00:12:21.885 { 00:12:21.885 "name": "BaseBdev3", 00:12:21.885 "uuid": "e5a5cd2a-9e96-5c9e-872e-37e0be95a917", 00:12:21.885 "is_configured": true, 00:12:21.885 "data_offset": 0, 00:12:21.885 "data_size": 65536 00:12:21.885 }, 00:12:21.885 { 00:12:21.885 "name": "BaseBdev4", 00:12:21.885 "uuid": "74890a8c-528a-59b0-a877-cd404a759876", 00:12:21.885 "is_configured": true, 00:12:21.885 "data_offset": 0, 00:12:21.885 "data_size": 65536 00:12:21.885 } 00:12:21.885 ] 00:12:21.885 }' 00:12:21.885 23:52:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.149 23:52:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:22.149 23:52:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target 
// "none"' 00:12:22.149 23:52:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:22.149 23:52:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:22.414 [2024-11-02 23:52:16.297489] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:22.414 [2024-11-02 23:52:16.402684] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:22.414 [2024-11-02 23:52:16.405054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.244 91.14 IOPS, 273.43 MiB/s [2024-11-02T23:52:17.339Z] 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:23.244 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:23.244 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.244 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:23.244 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:23.244 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.244 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.244 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.244 23:52:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.244 23:52:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.244 23:52:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.244 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.244 "name": 
"raid_bdev1", 00:12:23.244 "uuid": "9e2acf0c-670f-41e4-ad45-53232e08940f", 00:12:23.244 "strip_size_kb": 0, 00:12:23.244 "state": "online", 00:12:23.244 "raid_level": "raid1", 00:12:23.244 "superblock": false, 00:12:23.244 "num_base_bdevs": 4, 00:12:23.244 "num_base_bdevs_discovered": 3, 00:12:23.244 "num_base_bdevs_operational": 3, 00:12:23.244 "base_bdevs_list": [ 00:12:23.244 { 00:12:23.244 "name": "spare", 00:12:23.244 "uuid": "004e514b-9e0b-50f3-9a17-8659d473dd8b", 00:12:23.244 "is_configured": true, 00:12:23.244 "data_offset": 0, 00:12:23.244 "data_size": 65536 00:12:23.244 }, 00:12:23.244 { 00:12:23.244 "name": null, 00:12:23.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.244 "is_configured": false, 00:12:23.244 "data_offset": 0, 00:12:23.244 "data_size": 65536 00:12:23.244 }, 00:12:23.244 { 00:12:23.244 "name": "BaseBdev3", 00:12:23.244 "uuid": "e5a5cd2a-9e96-5c9e-872e-37e0be95a917", 00:12:23.244 "is_configured": true, 00:12:23.244 "data_offset": 0, 00:12:23.244 "data_size": 65536 00:12:23.244 }, 00:12:23.244 { 00:12:23.244 "name": "BaseBdev4", 00:12:23.244 "uuid": "74890a8c-528a-59b0-a877-cd404a759876", 00:12:23.244 "is_configured": true, 00:12:23.244 "data_offset": 0, 00:12:23.244 "data_size": 65536 00:12:23.244 } 00:12:23.244 ] 00:12:23.244 }' 00:12:23.244 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.244 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:23.244 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.244 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:23.244 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:23.245 23:52:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.245 "name": "raid_bdev1", 00:12:23.245 "uuid": "9e2acf0c-670f-41e4-ad45-53232e08940f", 00:12:23.245 "strip_size_kb": 0, 00:12:23.245 "state": "online", 00:12:23.245 "raid_level": "raid1", 00:12:23.245 "superblock": false, 00:12:23.245 "num_base_bdevs": 4, 00:12:23.245 "num_base_bdevs_discovered": 3, 00:12:23.245 "num_base_bdevs_operational": 3, 00:12:23.245 "base_bdevs_list": [ 00:12:23.245 { 00:12:23.245 "name": "spare", 00:12:23.245 "uuid": "004e514b-9e0b-50f3-9a17-8659d473dd8b", 00:12:23.245 "is_configured": true, 00:12:23.245 "data_offset": 0, 00:12:23.245 "data_size": 65536 00:12:23.245 }, 00:12:23.245 { 00:12:23.245 "name": null, 00:12:23.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.245 "is_configured": false, 00:12:23.245 "data_offset": 0, 00:12:23.245 "data_size": 65536 00:12:23.245 }, 00:12:23.245 { 00:12:23.245 "name": "BaseBdev3", 00:12:23.245 "uuid": "e5a5cd2a-9e96-5c9e-872e-37e0be95a917", 
00:12:23.245 "is_configured": true, 00:12:23.245 "data_offset": 0, 00:12:23.245 "data_size": 65536 00:12:23.245 }, 00:12:23.245 { 00:12:23.245 "name": "BaseBdev4", 00:12:23.245 "uuid": "74890a8c-528a-59b0-a877-cd404a759876", 00:12:23.245 "is_configured": true, 00:12:23.245 "data_offset": 0, 00:12:23.245 "data_size": 65536 00:12:23.245 } 00:12:23.245 ] 00:12:23.245 }' 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.245 23:52:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.504 23:52:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.504 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.504 "name": "raid_bdev1", 00:12:23.504 "uuid": "9e2acf0c-670f-41e4-ad45-53232e08940f", 00:12:23.504 "strip_size_kb": 0, 00:12:23.504 "state": "online", 00:12:23.504 "raid_level": "raid1", 00:12:23.504 "superblock": false, 00:12:23.504 "num_base_bdevs": 4, 00:12:23.504 "num_base_bdevs_discovered": 3, 00:12:23.504 "num_base_bdevs_operational": 3, 00:12:23.504 "base_bdevs_list": [ 00:12:23.504 { 00:12:23.505 "name": "spare", 00:12:23.505 "uuid": "004e514b-9e0b-50f3-9a17-8659d473dd8b", 00:12:23.505 "is_configured": true, 00:12:23.505 "data_offset": 0, 00:12:23.505 "data_size": 65536 00:12:23.505 }, 00:12:23.505 { 00:12:23.505 "name": null, 00:12:23.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.505 "is_configured": false, 00:12:23.505 "data_offset": 0, 00:12:23.505 "data_size": 65536 00:12:23.505 }, 00:12:23.505 { 00:12:23.505 "name": "BaseBdev3", 00:12:23.505 "uuid": "e5a5cd2a-9e96-5c9e-872e-37e0be95a917", 00:12:23.505 "is_configured": true, 00:12:23.505 "data_offset": 0, 00:12:23.505 "data_size": 65536 00:12:23.505 }, 00:12:23.505 { 00:12:23.505 "name": "BaseBdev4", 00:12:23.505 "uuid": "74890a8c-528a-59b0-a877-cd404a759876", 00:12:23.505 "is_configured": true, 00:12:23.505 "data_offset": 0, 00:12:23.505 "data_size": 65536 00:12:23.505 } 00:12:23.505 ] 00:12:23.505 }' 00:12:23.505 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.505 23:52:17 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.764 84.38 IOPS, 253.12 MiB/s [2024-11-02T23:52:17.859Z] 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:23.764 23:52:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.764 23:52:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.764 [2024-11-02 23:52:17.783292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:23.764 [2024-11-02 23:52:17.783322] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.764 00:12:23.764 Latency(us) 00:12:23.764 [2024-11-02T23:52:17.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:23.764 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:23.764 raid_bdev1 : 8.13 83.59 250.77 0.00 0.00 17178.90 291.55 117220.72 00:12:23.764 [2024-11-02T23:52:17.859Z] =================================================================================================================== 00:12:23.764 [2024-11-02T23:52:17.859Z] Total : 83.59 250.77 0.00 0.00 17178.90 291.55 117220.72 00:12:23.764 [2024-11-02 23:52:17.810676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.764 [2024-11-02 23:52:17.810715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.764 [2024-11-02 23:52:17.810839] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.764 [2024-11-02 23:52:17.810851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:23.764 { 00:12:23.764 "results": [ 00:12:23.764 { 00:12:23.764 "job": "raid_bdev1", 00:12:23.764 "core_mask": "0x1", 00:12:23.764 "workload": "randrw", 00:12:23.764 
"percentage": 50, 00:12:23.764 "status": "finished", 00:12:23.764 "queue_depth": 2, 00:12:23.764 "io_size": 3145728, 00:12:23.764 "runtime": 8.134951, 00:12:23.764 "iops": 83.58993188772742, 00:12:23.764 "mibps": 250.76979566318226, 00:12:23.764 "io_failed": 0, 00:12:23.764 "io_timeout": 0, 00:12:23.764 "avg_latency_us": 17178.89939892114, 00:12:23.764 "min_latency_us": 291.54934497816595, 00:12:23.764 "max_latency_us": 117220.7231441048 00:12:23.764 } 00:12:23.764 ], 00:12:23.764 "core_count": 1 00:12:23.764 } 00:12:23.764 23:52:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.764 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.764 23:52:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.764 23:52:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.764 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:23.764 23:52:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.024 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:24.024 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:24.024 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:24.024 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:24.024 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.024 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:24.024 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:24.024 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 
00:12:24.024 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:24.024 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:24.024 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:24.024 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:24.024 23:52:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:24.024 /dev/nbd0 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.024 1+0 records in 00:12:24.024 1+0 records out 00:12:24.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356846 s, 11.5 MB/s 00:12:24.024 23:52:18 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:24.024 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:24.284 23:52:18 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:24.284 /dev/nbd1 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.284 1+0 records in 00:12:24.284 1+0 records out 00:12:24.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339553 s, 12.1 MB/s 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.284 
23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:24.284 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:24.545 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:24.545 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.545 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:24.545 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:24.545 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:24.545 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.545 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:24.807 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:24.807 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:24.807 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:24.807 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.807 23:52:18 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.807 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:24.807 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:24.807 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.807 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:24.807 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:24.807 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:24.807 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.807 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:24.807 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:24.807 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:24.807 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:24.807 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:24.807 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:24.807 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:24.807 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:25.065 /dev/nbd1 00:12:25.065 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:25.065 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:25.065 23:52:18 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:25.065 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:12:25.065 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:25.065 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.066 1+0 records in 00:12:25.066 1+0 records out 00:12:25.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449099 s, 9.1 MB/s 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 
/dev/nbd0 /dev/nbd1 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:25.066 23:52:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:25.326 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:25.326 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:25.326 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:25.326 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:25.326 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:25.326 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:25.326 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:25.326 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:25.326 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:25.326 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:25.326 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 
00:12:25.326 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:25.326 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:25.326 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:25.326 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89150 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 89150 ']' 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 89150 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps 
--no-headers -o comm= 89150 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:25.586 killing process with pid 89150 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89150' 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 89150 00:12:25.586 Received shutdown signal, test time was about 9.805222 seconds 00:12:25.586 00:12:25.586 Latency(us) 00:12:25.586 [2024-11-02T23:52:19.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.586 [2024-11-02T23:52:19.681Z] =================================================================================================================== 00:12:25.586 [2024-11-02T23:52:19.681Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:25.586 23:52:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 89150 00:12:25.586 [2024-11-02 23:52:19.474250] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:25.586 [2024-11-02 23:52:19.520411] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:25.845 00:12:25.845 real 0m11.765s 00:12:25.845 user 0m15.294s 00:12:25.845 sys 0m1.773s 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.845 ************************************ 00:12:25.845 END TEST raid_rebuild_test_io 00:12:25.845 ************************************ 00:12:25.845 23:52:19 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:12:25.845 
23:52:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:25.845 23:52:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:25.845 23:52:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:25.845 ************************************ 00:12:25.845 START TEST raid_rebuild_test_sb_io 00:12:25.845 ************************************ 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 
00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89548 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89548 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@833 -- # '[' -z 89548 ']' 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:25.845 23:52:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.845 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:25.845 Zero copy mechanism will not be used. 00:12:25.845 [2024-11-02 23:52:19.885471] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:12:25.845 [2024-11-02 23:52:19.885611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89548 ] 00:12:26.104 [2024-11-02 23:52:20.042470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.104 [2024-11-02 23:52:20.070143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.104 [2024-11-02 23:52:20.113530] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.104 [2024-11-02 23:52:20.113570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.672 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:26.672 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:12:26.672 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:26.672 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:26.672 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.672 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.672 BaseBdev1_malloc 00:12:26.672 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.672 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:26.672 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.672 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.672 [2024-11-02 23:52:20.727815] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:26.672 [2024-11-02 23:52:20.727871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.672 [2024-11-02 23:52:20.727896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:26.672 [2024-11-02 23:52:20.727909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.672 [2024-11-02 23:52:20.729931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.672 [2024-11-02 23:52:20.729965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:26.672 BaseBdev1 00:12:26.672 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.672 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:26.672 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:26.672 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.672 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.672 BaseBdev2_malloc 00:12:26.672 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.672 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:26.672 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.672 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.672 [2024-11-02 23:52:20.756562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:26.672 [2024-11-02 23:52:20.756613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:26.672 [2024-11-02 23:52:20.756649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:26.672 [2024-11-02 23:52:20.756657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.672 [2024-11-02 23:52:20.758826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.673 [2024-11-02 23:52:20.758863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:26.673 BaseBdev2 00:12:26.673 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.673 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:26.673 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:26.673 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.673 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.932 BaseBdev3_malloc 00:12:26.932 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.932 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:26.932 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.932 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.932 [2024-11-02 23:52:20.785632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:26.932 [2024-11-02 23:52:20.785696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.932 [2024-11-02 23:52:20.785724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:26.932 
[2024-11-02 23:52:20.785733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.932 [2024-11-02 23:52:20.787996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.932 [2024-11-02 23:52:20.788044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:26.932 BaseBdev3 00:12:26.932 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.932 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:26.932 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:26.932 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.932 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.932 BaseBdev4_malloc 00:12:26.932 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.932 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:26.932 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.932 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.932 [2024-11-02 23:52:20.822413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:26.932 [2024-11-02 23:52:20.822469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.932 [2024-11-02 23:52:20.822491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:26.932 [2024-11-02 23:52:20.822500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.932 [2024-11-02 23:52:20.824642] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.932 [2024-11-02 23:52:20.824674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:26.932 BaseBdev4 00:12:26.932 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.932 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.933 spare_malloc 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.933 spare_delay 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.933 [2024-11-02 23:52:20.863087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:26.933 [2024-11-02 23:52:20.863134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.933 [2024-11-02 23:52:20.863151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000009c80 00:12:26.933 [2024-11-02 23:52:20.863159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.933 [2024-11-02 23:52:20.865199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.933 [2024-11-02 23:52:20.865239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:26.933 spare 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.933 [2024-11-02 23:52:20.875153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:26.933 [2024-11-02 23:52:20.877008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:26.933 [2024-11-02 23:52:20.877070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:26.933 [2024-11-02 23:52:20.877118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:26.933 [2024-11-02 23:52:20.877282] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:26.933 [2024-11-02 23:52:20.877297] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:26.933 [2024-11-02 23:52:20.877550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:26.933 [2024-11-02 23:52:20.877700] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:26.933 [2024-11-02 23:52:20.877720] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:26.933 [2024-11-02 23:52:20.877875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.933 "name": "raid_bdev1", 00:12:26.933 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:26.933 "strip_size_kb": 0, 00:12:26.933 "state": "online", 00:12:26.933 "raid_level": "raid1", 00:12:26.933 "superblock": true, 00:12:26.933 "num_base_bdevs": 4, 00:12:26.933 "num_base_bdevs_discovered": 4, 00:12:26.933 "num_base_bdevs_operational": 4, 00:12:26.933 "base_bdevs_list": [ 00:12:26.933 { 00:12:26.933 "name": "BaseBdev1", 00:12:26.933 "uuid": "9c2184b6-ed98-5bb0-86fd-646fc080fe60", 00:12:26.933 "is_configured": true, 00:12:26.933 "data_offset": 2048, 00:12:26.933 "data_size": 63488 00:12:26.933 }, 00:12:26.933 { 00:12:26.933 "name": "BaseBdev2", 00:12:26.933 "uuid": "6ea2735c-278e-5675-bd16-70c0e9b4183f", 00:12:26.933 "is_configured": true, 00:12:26.933 "data_offset": 2048, 00:12:26.933 "data_size": 63488 00:12:26.933 }, 00:12:26.933 { 00:12:26.933 "name": "BaseBdev3", 00:12:26.933 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:26.933 "is_configured": true, 00:12:26.933 "data_offset": 2048, 00:12:26.933 "data_size": 63488 00:12:26.933 }, 00:12:26.933 { 00:12:26.933 "name": "BaseBdev4", 00:12:26.933 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:26.933 "is_configured": true, 00:12:26.933 "data_offset": 2048, 00:12:26.933 "data_size": 63488 00:12:26.933 } 00:12:26.933 ] 00:12:26.933 }' 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.933 23:52:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:27.502 [2024-11-02 23:52:21.338959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.502 [2024-11-02 23:52:21.426348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.502 23:52:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.502 "name": "raid_bdev1", 00:12:27.502 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:27.502 "strip_size_kb": 0, 00:12:27.502 "state": "online", 00:12:27.502 "raid_level": "raid1", 00:12:27.502 
"superblock": true, 00:12:27.502 "num_base_bdevs": 4, 00:12:27.502 "num_base_bdevs_discovered": 3, 00:12:27.502 "num_base_bdevs_operational": 3, 00:12:27.502 "base_bdevs_list": [ 00:12:27.502 { 00:12:27.502 "name": null, 00:12:27.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.502 "is_configured": false, 00:12:27.502 "data_offset": 0, 00:12:27.502 "data_size": 63488 00:12:27.502 }, 00:12:27.502 { 00:12:27.502 "name": "BaseBdev2", 00:12:27.502 "uuid": "6ea2735c-278e-5675-bd16-70c0e9b4183f", 00:12:27.502 "is_configured": true, 00:12:27.502 "data_offset": 2048, 00:12:27.502 "data_size": 63488 00:12:27.502 }, 00:12:27.502 { 00:12:27.502 "name": "BaseBdev3", 00:12:27.502 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:27.502 "is_configured": true, 00:12:27.502 "data_offset": 2048, 00:12:27.502 "data_size": 63488 00:12:27.502 }, 00:12:27.502 { 00:12:27.502 "name": "BaseBdev4", 00:12:27.502 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:27.502 "is_configured": true, 00:12:27.502 "data_offset": 2048, 00:12:27.502 "data_size": 63488 00:12:27.502 } 00:12:27.502 ] 00:12:27.502 }' 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.502 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.502 [2024-11-02 23:52:21.516222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:27.502 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:27.502 Zero copy mechanism will not be used. 00:12:27.502 Running I/O for 60 seconds... 
00:12:28.071 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:28.071 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.071 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.071 [2024-11-02 23:52:21.882936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:28.071 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.071 23:52:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:28.071 [2024-11-02 23:52:21.928396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:12:28.072 [2024-11-02 23:52:21.930579] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:28.072 [2024-11-02 23:52:22.055353] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:28.331 [2024-11-02 23:52:22.293458] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:28.331 [2024-11-02 23:52:22.294183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:28.590 195.00 IOPS, 585.00 MiB/s [2024-11-02T23:52:22.685Z] [2024-11-02 23:52:22.645040] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:28.590 [2024-11-02 23:52:22.646164] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:28.850 [2024-11-02 23:52:22.886098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:28.850 23:52:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.850 23:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.850 23:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.850 23:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.850 23:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.850 23:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.850 23:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.850 23:52:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.850 23:52:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.850 23:52:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.111 23:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.111 "name": "raid_bdev1", 00:12:29.111 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:29.111 "strip_size_kb": 0, 00:12:29.111 "state": "online", 00:12:29.111 "raid_level": "raid1", 00:12:29.111 "superblock": true, 00:12:29.111 "num_base_bdevs": 4, 00:12:29.111 "num_base_bdevs_discovered": 4, 00:12:29.111 "num_base_bdevs_operational": 4, 00:12:29.111 "process": { 00:12:29.111 "type": "rebuild", 00:12:29.111 "target": "spare", 00:12:29.111 "progress": { 00:12:29.111 "blocks": 10240, 00:12:29.111 "percent": 16 00:12:29.111 } 00:12:29.111 }, 00:12:29.111 "base_bdevs_list": [ 00:12:29.111 { 00:12:29.111 "name": "spare", 00:12:29.111 "uuid": "2bb6dc27-7822-5916-90c4-e2c3b60ffe9d", 00:12:29.111 "is_configured": true, 00:12:29.111 "data_offset": 2048, 
00:12:29.111 "data_size": 63488 00:12:29.111 }, 00:12:29.111 { 00:12:29.111 "name": "BaseBdev2", 00:12:29.111 "uuid": "6ea2735c-278e-5675-bd16-70c0e9b4183f", 00:12:29.111 "is_configured": true, 00:12:29.111 "data_offset": 2048, 00:12:29.111 "data_size": 63488 00:12:29.111 }, 00:12:29.111 { 00:12:29.111 "name": "BaseBdev3", 00:12:29.111 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:29.111 "is_configured": true, 00:12:29.111 "data_offset": 2048, 00:12:29.111 "data_size": 63488 00:12:29.111 }, 00:12:29.111 { 00:12:29.111 "name": "BaseBdev4", 00:12:29.111 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:29.111 "is_configured": true, 00:12:29.111 "data_offset": 2048, 00:12:29.111 "data_size": 63488 00:12:29.111 } 00:12:29.111 ] 00:12:29.111 }' 00:12:29.111 23:52:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.111 [2024-11-02 23:52:23.062944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:29.111 [2024-11-02 23:52:23.156314] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:29.111 [2024-11-02 23:52:23.159575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.111 [2024-11-02 23:52:23.159626] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:29.111 [2024-11-02 23:52:23.159639] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:29.111 [2024-11-02 23:52:23.177152] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.111 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.111 23:52:23 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.371 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.371 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.371 "name": "raid_bdev1", 00:12:29.371 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:29.371 "strip_size_kb": 0, 00:12:29.371 "state": "online", 00:12:29.371 "raid_level": "raid1", 00:12:29.371 "superblock": true, 00:12:29.371 "num_base_bdevs": 4, 00:12:29.371 "num_base_bdevs_discovered": 3, 00:12:29.371 "num_base_bdevs_operational": 3, 00:12:29.371 "base_bdevs_list": [ 00:12:29.371 { 00:12:29.371 "name": null, 00:12:29.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.371 "is_configured": false, 00:12:29.371 "data_offset": 0, 00:12:29.371 "data_size": 63488 00:12:29.371 }, 00:12:29.371 { 00:12:29.371 "name": "BaseBdev2", 00:12:29.371 "uuid": "6ea2735c-278e-5675-bd16-70c0e9b4183f", 00:12:29.371 "is_configured": true, 00:12:29.371 "data_offset": 2048, 00:12:29.371 "data_size": 63488 00:12:29.371 }, 00:12:29.371 { 00:12:29.371 "name": "BaseBdev3", 00:12:29.371 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:29.371 "is_configured": true, 00:12:29.371 "data_offset": 2048, 00:12:29.371 "data_size": 63488 00:12:29.371 }, 00:12:29.371 { 00:12:29.371 "name": "BaseBdev4", 00:12:29.371 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:29.371 "is_configured": true, 00:12:29.371 "data_offset": 2048, 00:12:29.371 "data_size": 63488 00:12:29.371 } 00:12:29.371 ] 00:12:29.371 }' 00:12:29.371 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.371 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.631 192.50 IOPS, 577.50 MiB/s [2024-11-02T23:52:23.726Z] 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:12:29.631 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.631 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:29.631 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:29.631 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.631 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.631 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.631 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.631 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.631 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.631 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.631 "name": "raid_bdev1", 00:12:29.631 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:29.631 "strip_size_kb": 0, 00:12:29.631 "state": "online", 00:12:29.631 "raid_level": "raid1", 00:12:29.631 "superblock": true, 00:12:29.631 "num_base_bdevs": 4, 00:12:29.631 "num_base_bdevs_discovered": 3, 00:12:29.631 "num_base_bdevs_operational": 3, 00:12:29.631 "base_bdevs_list": [ 00:12:29.631 { 00:12:29.631 "name": null, 00:12:29.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.631 "is_configured": false, 00:12:29.631 "data_offset": 0, 00:12:29.631 "data_size": 63488 00:12:29.631 }, 00:12:29.631 { 00:12:29.631 "name": "BaseBdev2", 00:12:29.631 "uuid": "6ea2735c-278e-5675-bd16-70c0e9b4183f", 00:12:29.631 "is_configured": true, 00:12:29.631 "data_offset": 2048, 00:12:29.631 "data_size": 63488 00:12:29.631 }, 00:12:29.631 { 00:12:29.631 "name": "BaseBdev3", 
00:12:29.631 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:29.631 "is_configured": true, 00:12:29.631 "data_offset": 2048, 00:12:29.631 "data_size": 63488 00:12:29.631 }, 00:12:29.631 { 00:12:29.631 "name": "BaseBdev4", 00:12:29.631 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:29.631 "is_configured": true, 00:12:29.631 "data_offset": 2048, 00:12:29.631 "data_size": 63488 00:12:29.631 } 00:12:29.631 ] 00:12:29.631 }' 00:12:29.631 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.631 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:29.631 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.891 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:29.891 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:29.891 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.891 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.891 [2024-11-02 23:52:23.737828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:29.891 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.891 23:52:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:29.891 [2024-11-02 23:52:23.780982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:12:29.891 [2024-11-02 23:52:23.782946] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:29.891 [2024-11-02 23:52:23.889110] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:29.891 
[2024-11-02 23:52:23.889531] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:30.150 [2024-11-02 23:52:24.099860] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:30.151 [2024-11-02 23:52:24.100544] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:30.409 [2024-11-02 23:52:24.443103] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:30.409 [2024-11-02 23:52:24.443636] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:30.932 177.67 IOPS, 533.00 MiB/s [2024-11-02T23:52:25.027Z] 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.932 
23:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.932 "name": "raid_bdev1", 00:12:30.932 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:30.932 "strip_size_kb": 0, 00:12:30.932 "state": "online", 00:12:30.932 "raid_level": "raid1", 00:12:30.932 "superblock": true, 00:12:30.932 "num_base_bdevs": 4, 00:12:30.932 "num_base_bdevs_discovered": 4, 00:12:30.932 "num_base_bdevs_operational": 4, 00:12:30.932 "process": { 00:12:30.932 "type": "rebuild", 00:12:30.932 "target": "spare", 00:12:30.932 "progress": { 00:12:30.932 "blocks": 12288, 00:12:30.932 "percent": 19 00:12:30.932 } 00:12:30.932 }, 00:12:30.932 "base_bdevs_list": [ 00:12:30.932 { 00:12:30.932 "name": "spare", 00:12:30.932 "uuid": "2bb6dc27-7822-5916-90c4-e2c3b60ffe9d", 00:12:30.932 "is_configured": true, 00:12:30.932 "data_offset": 2048, 00:12:30.932 "data_size": 63488 00:12:30.932 }, 00:12:30.932 { 00:12:30.932 "name": "BaseBdev2", 00:12:30.932 "uuid": "6ea2735c-278e-5675-bd16-70c0e9b4183f", 00:12:30.932 "is_configured": true, 00:12:30.932 "data_offset": 2048, 00:12:30.932 "data_size": 63488 00:12:30.932 }, 00:12:30.932 { 00:12:30.932 "name": "BaseBdev3", 00:12:30.932 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:30.932 "is_configured": true, 00:12:30.932 "data_offset": 2048, 00:12:30.932 "data_size": 63488 00:12:30.932 }, 00:12:30.932 { 00:12:30.932 "name": "BaseBdev4", 00:12:30.932 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:30.932 "is_configured": true, 00:12:30.932 "data_offset": 2048, 00:12:30.932 "data_size": 63488 00:12:30.932 } 00:12:30.932 ] 00:12:30.932 }' 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.932 [2024-11-02 23:52:24.840818] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:30.932 [2024-11-02 23:52:24.841270] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:30.932 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.932 23:52:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.932 [2024-11-02 23:52:24.929080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:31.193 [2024-11-02 23:52:25.130127] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:12:31.193 [2024-11-02 23:52:25.130170] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 
00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.193 "name": "raid_bdev1", 00:12:31.193 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:31.193 "strip_size_kb": 0, 00:12:31.193 "state": "online", 00:12:31.193 "raid_level": "raid1", 00:12:31.193 "superblock": true, 00:12:31.193 "num_base_bdevs": 4, 00:12:31.193 "num_base_bdevs_discovered": 3, 00:12:31.193 "num_base_bdevs_operational": 3, 00:12:31.193 "process": { 00:12:31.193 "type": "rebuild", 00:12:31.193 "target": "spare", 00:12:31.193 "progress": { 00:12:31.193 "blocks": 16384, 00:12:31.193 "percent": 25 00:12:31.193 } 00:12:31.193 }, 00:12:31.193 "base_bdevs_list": [ 00:12:31.193 { 00:12:31.193 "name": 
"spare", 00:12:31.193 "uuid": "2bb6dc27-7822-5916-90c4-e2c3b60ffe9d", 00:12:31.193 "is_configured": true, 00:12:31.193 "data_offset": 2048, 00:12:31.193 "data_size": 63488 00:12:31.193 }, 00:12:31.193 { 00:12:31.193 "name": null, 00:12:31.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.193 "is_configured": false, 00:12:31.193 "data_offset": 0, 00:12:31.193 "data_size": 63488 00:12:31.193 }, 00:12:31.193 { 00:12:31.193 "name": "BaseBdev3", 00:12:31.193 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:31.193 "is_configured": true, 00:12:31.193 "data_offset": 2048, 00:12:31.193 "data_size": 63488 00:12:31.193 }, 00:12:31.193 { 00:12:31.193 "name": "BaseBdev4", 00:12:31.193 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:31.193 "is_configured": true, 00:12:31.193 "data_offset": 2048, 00:12:31.193 "data_size": 63488 00:12:31.193 } 00:12:31.193 ] 00:12:31.193 }' 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=404 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.193 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.454 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.454 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.454 "name": "raid_bdev1", 00:12:31.454 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:31.454 "strip_size_kb": 0, 00:12:31.454 "state": "online", 00:12:31.454 "raid_level": "raid1", 00:12:31.454 "superblock": true, 00:12:31.454 "num_base_bdevs": 4, 00:12:31.454 "num_base_bdevs_discovered": 3, 00:12:31.454 "num_base_bdevs_operational": 3, 00:12:31.454 "process": { 00:12:31.454 "type": "rebuild", 00:12:31.454 "target": "spare", 00:12:31.454 "progress": { 00:12:31.454 "blocks": 18432, 00:12:31.454 "percent": 29 00:12:31.454 } 00:12:31.454 }, 00:12:31.454 "base_bdevs_list": [ 00:12:31.454 { 00:12:31.454 "name": "spare", 00:12:31.454 "uuid": "2bb6dc27-7822-5916-90c4-e2c3b60ffe9d", 00:12:31.454 "is_configured": true, 00:12:31.454 "data_offset": 2048, 00:12:31.454 "data_size": 63488 00:12:31.454 }, 00:12:31.454 { 00:12:31.454 "name": null, 00:12:31.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.454 "is_configured": false, 00:12:31.454 "data_offset": 0, 00:12:31.454 "data_size": 63488 00:12:31.454 }, 00:12:31.454 { 00:12:31.454 "name": "BaseBdev3", 00:12:31.454 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:31.454 
"is_configured": true, 00:12:31.454 "data_offset": 2048, 00:12:31.454 "data_size": 63488 00:12:31.454 }, 00:12:31.454 { 00:12:31.454 "name": "BaseBdev4", 00:12:31.454 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:31.454 "is_configured": true, 00:12:31.454 "data_offset": 2048, 00:12:31.454 "data_size": 63488 00:12:31.454 } 00:12:31.454 ] 00:12:31.454 }' 00:12:31.454 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.454 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.454 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.454 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.454 23:52:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:32.022 154.75 IOPS, 464.25 MiB/s [2024-11-02T23:52:26.117Z] [2024-11-02 23:52:26.000405] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:32.281 [2024-11-02 23:52:26.209884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:32.540 23:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:32.540 23:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:32.540 23:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.540 23:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:32.540 23:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:32.540 23:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:32.540 23:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.540 23:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.540 23:52:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.540 23:52:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.540 23:52:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.540 23:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.540 "name": "raid_bdev1", 00:12:32.540 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:32.540 "strip_size_kb": 0, 00:12:32.540 "state": "online", 00:12:32.540 "raid_level": "raid1", 00:12:32.540 "superblock": true, 00:12:32.540 "num_base_bdevs": 4, 00:12:32.540 "num_base_bdevs_discovered": 3, 00:12:32.540 "num_base_bdevs_operational": 3, 00:12:32.540 "process": { 00:12:32.540 "type": "rebuild", 00:12:32.540 "target": "spare", 00:12:32.540 "progress": { 00:12:32.540 "blocks": 36864, 00:12:32.540 "percent": 58 00:12:32.540 } 00:12:32.540 }, 00:12:32.540 "base_bdevs_list": [ 00:12:32.540 { 00:12:32.540 "name": "spare", 00:12:32.540 "uuid": "2bb6dc27-7822-5916-90c4-e2c3b60ffe9d", 00:12:32.540 "is_configured": true, 00:12:32.540 "data_offset": 2048, 00:12:32.540 "data_size": 63488 00:12:32.540 }, 00:12:32.540 { 00:12:32.540 "name": null, 00:12:32.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.540 "is_configured": false, 00:12:32.540 "data_offset": 0, 00:12:32.540 "data_size": 63488 00:12:32.540 }, 00:12:32.540 { 00:12:32.540 "name": "BaseBdev3", 00:12:32.540 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:32.540 "is_configured": true, 00:12:32.540 "data_offset": 2048, 00:12:32.540 "data_size": 63488 00:12:32.540 }, 00:12:32.540 { 00:12:32.540 "name": "BaseBdev4", 00:12:32.540 
"uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:32.540 "is_configured": true, 00:12:32.540 "data_offset": 2048, 00:12:32.540 "data_size": 63488 00:12:32.540 } 00:12:32.540 ] 00:12:32.540 }' 00:12:32.540 23:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.540 134.00 IOPS, 402.00 MiB/s [2024-11-02T23:52:26.635Z] [2024-11-02 23:52:26.520022] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:32.540 23:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:32.540 23:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.540 23:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:32.540 23:52:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:32.799 [2024-11-02 23:52:26.733137] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:32.799 [2024-11-02 23:52:26.733548] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:33.769 118.50 IOPS, 355.50 MiB/s [2024-11-02T23:52:27.864Z] 23:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:33.769 23:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.769 23:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.769 23:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.769 23:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.769 23:52:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.769 23:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.769 23:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.769 23:52:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.769 23:52:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.769 23:52:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.769 23:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.769 "name": "raid_bdev1", 00:12:33.769 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:33.769 "strip_size_kb": 0, 00:12:33.769 "state": "online", 00:12:33.769 "raid_level": "raid1", 00:12:33.769 "superblock": true, 00:12:33.769 "num_base_bdevs": 4, 00:12:33.769 "num_base_bdevs_discovered": 3, 00:12:33.769 "num_base_bdevs_operational": 3, 00:12:33.769 "process": { 00:12:33.769 "type": "rebuild", 00:12:33.769 "target": "spare", 00:12:33.769 "progress": { 00:12:33.769 "blocks": 53248, 00:12:33.769 "percent": 83 00:12:33.769 } 00:12:33.769 }, 00:12:33.769 "base_bdevs_list": [ 00:12:33.769 { 00:12:33.769 "name": "spare", 00:12:33.769 "uuid": "2bb6dc27-7822-5916-90c4-e2c3b60ffe9d", 00:12:33.769 "is_configured": true, 00:12:33.769 "data_offset": 2048, 00:12:33.769 "data_size": 63488 00:12:33.769 }, 00:12:33.769 { 00:12:33.769 "name": null, 00:12:33.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.769 "is_configured": false, 00:12:33.769 "data_offset": 0, 00:12:33.769 "data_size": 63488 00:12:33.769 }, 00:12:33.769 { 00:12:33.769 "name": "BaseBdev3", 00:12:33.769 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:33.769 "is_configured": true, 00:12:33.769 "data_offset": 2048, 00:12:33.769 "data_size": 63488 00:12:33.769 }, 00:12:33.769 { 
00:12:33.769 "name": "BaseBdev4", 00:12:33.769 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:33.769 "is_configured": true, 00:12:33.769 "data_offset": 2048, 00:12:33.769 "data_size": 63488 00:12:33.769 } 00:12:33.769 ] 00:12:33.769 }' 00:12:33.769 23:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.769 23:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:33.769 23:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.769 23:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:33.769 23:52:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:34.029 [2024-11-02 23:52:28.031009] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:34.289 [2024-11-02 23:52:28.136157] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:34.289 [2024-11-02 23:52:28.138678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.808 106.57 IOPS, 319.71 MiB/s [2024-11-02T23:52:28.903Z] 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.808 "name": "raid_bdev1", 00:12:34.808 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:34.808 "strip_size_kb": 0, 00:12:34.808 "state": "online", 00:12:34.808 "raid_level": "raid1", 00:12:34.808 "superblock": true, 00:12:34.808 "num_base_bdevs": 4, 00:12:34.808 "num_base_bdevs_discovered": 3, 00:12:34.808 "num_base_bdevs_operational": 3, 00:12:34.808 "base_bdevs_list": [ 00:12:34.808 { 00:12:34.808 "name": "spare", 00:12:34.808 "uuid": "2bb6dc27-7822-5916-90c4-e2c3b60ffe9d", 00:12:34.808 "is_configured": true, 00:12:34.808 "data_offset": 2048, 00:12:34.808 "data_size": 63488 00:12:34.808 }, 00:12:34.808 { 00:12:34.808 "name": null, 00:12:34.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.808 "is_configured": false, 00:12:34.808 "data_offset": 0, 00:12:34.808 "data_size": 63488 00:12:34.808 }, 00:12:34.808 { 00:12:34.808 "name": "BaseBdev3", 00:12:34.808 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:34.808 "is_configured": true, 00:12:34.808 "data_offset": 2048, 00:12:34.808 "data_size": 63488 00:12:34.808 }, 00:12:34.808 { 00:12:34.808 "name": "BaseBdev4", 00:12:34.808 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:34.808 "is_configured": true, 00:12:34.808 "data_offset": 2048, 00:12:34.808 "data_size": 63488 00:12:34.808 } 00:12:34.808 ] 00:12:34.808 }' 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.808 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.067 "name": "raid_bdev1", 00:12:35.067 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:35.067 "strip_size_kb": 0, 00:12:35.067 "state": "online", 00:12:35.067 "raid_level": "raid1", 00:12:35.067 "superblock": true, 00:12:35.067 "num_base_bdevs": 4, 
00:12:35.067 "num_base_bdevs_discovered": 3, 00:12:35.067 "num_base_bdevs_operational": 3, 00:12:35.067 "base_bdevs_list": [ 00:12:35.067 { 00:12:35.067 "name": "spare", 00:12:35.067 "uuid": "2bb6dc27-7822-5916-90c4-e2c3b60ffe9d", 00:12:35.067 "is_configured": true, 00:12:35.067 "data_offset": 2048, 00:12:35.067 "data_size": 63488 00:12:35.067 }, 00:12:35.067 { 00:12:35.067 "name": null, 00:12:35.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.067 "is_configured": false, 00:12:35.067 "data_offset": 0, 00:12:35.067 "data_size": 63488 00:12:35.067 }, 00:12:35.067 { 00:12:35.067 "name": "BaseBdev3", 00:12:35.067 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:35.067 "is_configured": true, 00:12:35.067 "data_offset": 2048, 00:12:35.067 "data_size": 63488 00:12:35.067 }, 00:12:35.067 { 00:12:35.067 "name": "BaseBdev4", 00:12:35.067 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:35.067 "is_configured": true, 00:12:35.067 "data_offset": 2048, 00:12:35.067 "data_size": 63488 00:12:35.067 } 00:12:35.067 ] 00:12:35.067 }' 00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.067 23:52:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.067 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.067 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.067 "name": "raid_bdev1", 00:12:35.067 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:35.067 "strip_size_kb": 0, 00:12:35.067 "state": "online", 00:12:35.067 "raid_level": "raid1", 00:12:35.067 "superblock": true, 00:12:35.067 "num_base_bdevs": 4, 00:12:35.067 "num_base_bdevs_discovered": 3, 00:12:35.067 "num_base_bdevs_operational": 3, 00:12:35.067 "base_bdevs_list": [ 00:12:35.067 { 00:12:35.067 "name": "spare", 00:12:35.067 "uuid": "2bb6dc27-7822-5916-90c4-e2c3b60ffe9d", 00:12:35.067 "is_configured": true, 00:12:35.067 "data_offset": 2048, 00:12:35.067 "data_size": 63488 00:12:35.067 }, 00:12:35.067 { 00:12:35.067 "name": null, 00:12:35.067 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:35.067 "is_configured": false, 00:12:35.067 "data_offset": 0, 00:12:35.067 "data_size": 63488 00:12:35.067 }, 00:12:35.067 { 00:12:35.067 "name": "BaseBdev3", 00:12:35.067 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:35.067 "is_configured": true, 00:12:35.067 "data_offset": 2048, 00:12:35.067 "data_size": 63488 00:12:35.067 }, 00:12:35.067 { 00:12:35.067 "name": "BaseBdev4", 00:12:35.067 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:35.067 "is_configured": true, 00:12:35.067 "data_offset": 2048, 00:12:35.067 "data_size": 63488 00:12:35.067 } 00:12:35.067 ] 00:12:35.067 }' 00:12:35.067 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.067 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.327 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:35.327 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.327 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.327 [2024-11-02 23:52:29.397086] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:35.327 [2024-11-02 23:52:29.397122] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:35.586 00:12:35.586 Latency(us) 00:12:35.586 [2024-11-02T23:52:29.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.586 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:35.586 raid_bdev1 : 7.93 97.99 293.96 0.00 0.00 14427.16 293.34 116762.83 00:12:35.586 [2024-11-02T23:52:29.681Z] =================================================================================================================== 00:12:35.586 [2024-11-02T23:52:29.681Z] Total : 97.99 293.96 0.00 0.00 14427.16 
293.34 116762.83 00:12:35.586 [2024-11-02 23:52:29.436503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.586 [2024-11-02 23:52:29.436551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.586 [2024-11-02 23:52:29.436649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.586 [2024-11-02 23:52:29.436661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:35.586 { 00:12:35.586 "results": [ 00:12:35.586 { 00:12:35.586 "job": "raid_bdev1", 00:12:35.586 "core_mask": "0x1", 00:12:35.586 "workload": "randrw", 00:12:35.586 "percentage": 50, 00:12:35.586 "status": "finished", 00:12:35.586 "queue_depth": 2, 00:12:35.586 "io_size": 3145728, 00:12:35.586 "runtime": 7.929698, 00:12:35.586 "iops": 97.98607714947026, 00:12:35.586 "mibps": 293.9582314484108, 00:12:35.586 "io_failed": 0, 00:12:35.586 "io_timeout": 0, 00:12:35.586 "avg_latency_us": 14427.156504976592, 00:12:35.586 "min_latency_us": 293.3379912663755, 00:12:35.586 "max_latency_us": 116762.82969432314 00:12:35.586 } 00:12:35.586 ], 00:12:35.586 "core_count": 1 00:12:35.587 } 00:12:35.587 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.587 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.587 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:35.587 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.587 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.587 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.587 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:35.587 23:52:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:35.587 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:35.587 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:35.587 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:35.587 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:35.587 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:35.587 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:35.587 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:35.587 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:35.587 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:35.587 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:35.587 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:35.845 /dev/nbd0 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 
00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.845 1+0 records in 00:12:35.845 1+0 records out 00:12:35.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004321 s, 9.5 MB/s 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in 
"${base_bdevs[@]:1}" 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:35.845 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:36.136 /dev/nbd1 00:12:36.136 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:36.136 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:36.136 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:36.136 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:12:36.136 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:36.136 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:36.136 23:52:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:36.136 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:12:36.136 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:36.136 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:36.136 23:52:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.136 1+0 records in 00:12:36.136 1+0 records out 00:12:36.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487118 s, 8.4 MB/s 00:12:36.136 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.136 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:12:36.136 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.136 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:36.136 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:12:36.136 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.136 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:36.136 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:36.136 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:36.136 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:36.136 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:36.136 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:36.136 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:36.136 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.136 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:36.395 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:36.395 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:36.395 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:36.395 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.395 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.395 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:36.395 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:36.395 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.395 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:36.395 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:36.395 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:36.395 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:36.395 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:36.395 23:52:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:36.395 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:36.395 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:36.395 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:36.395 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:36.395 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:36.395 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:36.654 /dev/nbd1 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.654 1+0 records in 00:12:36.654 1+0 records out 00:12:36.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361423 s, 11.3 MB/s 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:36.654 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:36.655 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:36.655 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.655 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:12:36.914 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:36.914 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:36.914 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:36.914 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.914 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.914 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:36.914 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:36.914 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.914 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:36.914 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:36.914 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:36.914 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:36.914 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:36.914 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.914 23:52:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:37.174 
23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.174 [2024-11-02 23:52:31.095728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:37.174 [2024-11-02 23:52:31.095789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.174 [2024-11-02 23:52:31.095808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:37.174 [2024-11-02 23:52:31.095819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.174 [2024-11-02 23:52:31.097950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.174 
[2024-11-02 23:52:31.097990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:37.174 [2024-11-02 23:52:31.098070] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:37.174 [2024-11-02 23:52:31.098106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:37.174 [2024-11-02 23:52:31.098242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:37.174 [2024-11-02 23:52:31.098342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:37.174 spare 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.174 [2024-11-02 23:52:31.198244] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:12:37.174 [2024-11-02 23:52:31.198279] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:37.174 [2024-11-02 23:52:31.198551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000337b0 00:12:37.174 [2024-11-02 23:52:31.198706] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:12:37.174 [2024-11-02 23:52:31.198724] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:12:37.174 [2024-11-02 23:52:31.198878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.174 23:52:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:37.174 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.175 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.175 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.175 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.175 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.175 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.175 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.175 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.175 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.175 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.175 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.175 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.175 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.175 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.175 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.175 "name": "raid_bdev1", 00:12:37.175 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:37.175 "strip_size_kb": 0, 00:12:37.175 "state": "online", 00:12:37.175 "raid_level": "raid1", 00:12:37.175 
"superblock": true, 00:12:37.175 "num_base_bdevs": 4, 00:12:37.175 "num_base_bdevs_discovered": 3, 00:12:37.175 "num_base_bdevs_operational": 3, 00:12:37.175 "base_bdevs_list": [ 00:12:37.175 { 00:12:37.175 "name": "spare", 00:12:37.175 "uuid": "2bb6dc27-7822-5916-90c4-e2c3b60ffe9d", 00:12:37.175 "is_configured": true, 00:12:37.175 "data_offset": 2048, 00:12:37.175 "data_size": 63488 00:12:37.175 }, 00:12:37.175 { 00:12:37.175 "name": null, 00:12:37.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.175 "is_configured": false, 00:12:37.175 "data_offset": 2048, 00:12:37.175 "data_size": 63488 00:12:37.175 }, 00:12:37.175 { 00:12:37.175 "name": "BaseBdev3", 00:12:37.175 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:37.175 "is_configured": true, 00:12:37.175 "data_offset": 2048, 00:12:37.175 "data_size": 63488 00:12:37.175 }, 00:12:37.175 { 00:12:37.175 "name": "BaseBdev4", 00:12:37.175 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:37.175 "is_configured": true, 00:12:37.175 "data_offset": 2048, 00:12:37.175 "data_size": 63488 00:12:37.175 } 00:12:37.175 ] 00:12:37.175 }' 00:12:37.175 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.175 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.743 "name": "raid_bdev1", 00:12:37.743 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:37.743 "strip_size_kb": 0, 00:12:37.743 "state": "online", 00:12:37.743 "raid_level": "raid1", 00:12:37.743 "superblock": true, 00:12:37.743 "num_base_bdevs": 4, 00:12:37.743 "num_base_bdevs_discovered": 3, 00:12:37.743 "num_base_bdevs_operational": 3, 00:12:37.743 "base_bdevs_list": [ 00:12:37.743 { 00:12:37.743 "name": "spare", 00:12:37.743 "uuid": "2bb6dc27-7822-5916-90c4-e2c3b60ffe9d", 00:12:37.743 "is_configured": true, 00:12:37.743 "data_offset": 2048, 00:12:37.743 "data_size": 63488 00:12:37.743 }, 00:12:37.743 { 00:12:37.743 "name": null, 00:12:37.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.743 "is_configured": false, 00:12:37.743 "data_offset": 2048, 00:12:37.743 "data_size": 63488 00:12:37.743 }, 00:12:37.743 { 00:12:37.743 "name": "BaseBdev3", 00:12:37.743 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:37.743 "is_configured": true, 00:12:37.743 "data_offset": 2048, 00:12:37.743 "data_size": 63488 00:12:37.743 }, 00:12:37.743 { 00:12:37.743 "name": "BaseBdev4", 00:12:37.743 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:37.743 "is_configured": true, 00:12:37.743 "data_offset": 2048, 00:12:37.743 "data_size": 63488 00:12:37.743 } 00:12:37.743 ] 00:12:37.743 }' 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.743 [2024-11-02 23:52:31.806628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.743 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.003 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.003 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.003 "name": "raid_bdev1", 00:12:38.003 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:38.003 "strip_size_kb": 0, 00:12:38.003 "state": "online", 00:12:38.003 "raid_level": "raid1", 00:12:38.003 "superblock": true, 00:12:38.003 "num_base_bdevs": 4, 00:12:38.003 "num_base_bdevs_discovered": 2, 00:12:38.003 "num_base_bdevs_operational": 2, 00:12:38.003 "base_bdevs_list": [ 00:12:38.003 { 00:12:38.003 "name": null, 00:12:38.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.003 "is_configured": false, 00:12:38.003 "data_offset": 0, 00:12:38.003 "data_size": 63488 00:12:38.003 }, 00:12:38.003 { 
00:12:38.003 "name": null, 00:12:38.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.003 "is_configured": false, 00:12:38.003 "data_offset": 2048, 00:12:38.003 "data_size": 63488 00:12:38.003 }, 00:12:38.003 { 00:12:38.003 "name": "BaseBdev3", 00:12:38.003 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:38.003 "is_configured": true, 00:12:38.003 "data_offset": 2048, 00:12:38.003 "data_size": 63488 00:12:38.003 }, 00:12:38.003 { 00:12:38.003 "name": "BaseBdev4", 00:12:38.003 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:38.003 "is_configured": true, 00:12:38.003 "data_offset": 2048, 00:12:38.003 "data_size": 63488 00:12:38.003 } 00:12:38.003 ] 00:12:38.003 }' 00:12:38.003 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.003 23:52:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.262 23:52:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:38.263 23:52:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.263 23:52:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.263 [2024-11-02 23:52:32.261953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:38.263 [2024-11-02 23:52:32.262140] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:38.263 [2024-11-02 23:52:32.262155] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:38.263 [2024-11-02 23:52:32.262198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:38.263 [2024-11-02 23:52:32.266562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033880 00:12:38.263 23:52:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.263 23:52:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:38.263 [2024-11-02 23:52:32.268492] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:39.212 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.212 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.212 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.212 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.212 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.212 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.212 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.212 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.212 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.212 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.472 "name": "raid_bdev1", 00:12:39.472 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:39.472 "strip_size_kb": 0, 00:12:39.472 "state": "online", 
00:12:39.472 "raid_level": "raid1", 00:12:39.472 "superblock": true, 00:12:39.472 "num_base_bdevs": 4, 00:12:39.472 "num_base_bdevs_discovered": 3, 00:12:39.472 "num_base_bdevs_operational": 3, 00:12:39.472 "process": { 00:12:39.472 "type": "rebuild", 00:12:39.472 "target": "spare", 00:12:39.472 "progress": { 00:12:39.472 "blocks": 20480, 00:12:39.472 "percent": 32 00:12:39.472 } 00:12:39.472 }, 00:12:39.472 "base_bdevs_list": [ 00:12:39.472 { 00:12:39.472 "name": "spare", 00:12:39.472 "uuid": "2bb6dc27-7822-5916-90c4-e2c3b60ffe9d", 00:12:39.472 "is_configured": true, 00:12:39.472 "data_offset": 2048, 00:12:39.472 "data_size": 63488 00:12:39.472 }, 00:12:39.472 { 00:12:39.472 "name": null, 00:12:39.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.472 "is_configured": false, 00:12:39.472 "data_offset": 2048, 00:12:39.472 "data_size": 63488 00:12:39.472 }, 00:12:39.472 { 00:12:39.472 "name": "BaseBdev3", 00:12:39.472 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:39.472 "is_configured": true, 00:12:39.472 "data_offset": 2048, 00:12:39.472 "data_size": 63488 00:12:39.472 }, 00:12:39.472 { 00:12:39.472 "name": "BaseBdev4", 00:12:39.472 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:39.472 "is_configured": true, 00:12:39.472 "data_offset": 2048, 00:12:39.472 "data_size": 63488 00:12:39.472 } 00:12:39.472 ] 00:12:39.472 }' 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:39.472 23:52:33 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.472 [2024-11-02 23:52:33.432976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:39.472 [2024-11-02 23:52:33.472815] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:39.472 [2024-11-02 23:52:33.472886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.472 [2024-11-02 23:52:33.472901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:39.472 [2024-11-02 23:52:33.472910] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.472 23:52:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.472 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.472 "name": "raid_bdev1", 00:12:39.472 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:39.472 "strip_size_kb": 0, 00:12:39.472 "state": "online", 00:12:39.472 "raid_level": "raid1", 00:12:39.472 "superblock": true, 00:12:39.472 "num_base_bdevs": 4, 00:12:39.472 "num_base_bdevs_discovered": 2, 00:12:39.472 "num_base_bdevs_operational": 2, 00:12:39.472 "base_bdevs_list": [ 00:12:39.472 { 00:12:39.472 "name": null, 00:12:39.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.473 "is_configured": false, 00:12:39.473 "data_offset": 0, 00:12:39.473 "data_size": 63488 00:12:39.473 }, 00:12:39.473 { 00:12:39.473 "name": null, 00:12:39.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.473 "is_configured": false, 00:12:39.473 "data_offset": 2048, 00:12:39.473 "data_size": 63488 00:12:39.473 }, 00:12:39.473 { 00:12:39.473 "name": "BaseBdev3", 00:12:39.473 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:39.473 "is_configured": true, 00:12:39.473 "data_offset": 2048, 00:12:39.473 "data_size": 63488 00:12:39.473 }, 00:12:39.473 { 00:12:39.473 "name": "BaseBdev4", 00:12:39.473 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:39.473 "is_configured": true, 00:12:39.473 "data_offset": 2048, 00:12:39.473 
"data_size": 63488 00:12:39.473 } 00:12:39.473 ] 00:12:39.473 }' 00:12:39.473 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.473 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.041 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:40.041 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.041 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.041 [2024-11-02 23:52:33.992514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:40.041 [2024-11-02 23:52:33.992584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.041 [2024-11-02 23:52:33.992608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:40.041 [2024-11-02 23:52:33.992619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.041 [2024-11-02 23:52:33.993044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.041 [2024-11-02 23:52:33.993073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:40.041 [2024-11-02 23:52:33.993166] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:40.041 [2024-11-02 23:52:33.993184] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:40.041 [2024-11-02 23:52:33.993194] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:40.041 [2024-11-02 23:52:33.993225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:40.041 [2024-11-02 23:52:33.997631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033950 00:12:40.041 spare 00:12:40.041 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.041 23:52:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:40.041 [2024-11-02 23:52:33.999516] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:40.979 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.979 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.979 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.979 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.979 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.979 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.979 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.979 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.979 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.979 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.979 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.979 "name": "raid_bdev1", 00:12:40.979 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:40.979 "strip_size_kb": 0, 00:12:40.979 
"state": "online", 00:12:40.979 "raid_level": "raid1", 00:12:40.979 "superblock": true, 00:12:40.979 "num_base_bdevs": 4, 00:12:40.979 "num_base_bdevs_discovered": 3, 00:12:40.979 "num_base_bdevs_operational": 3, 00:12:40.979 "process": { 00:12:40.979 "type": "rebuild", 00:12:40.979 "target": "spare", 00:12:40.979 "progress": { 00:12:40.979 "blocks": 20480, 00:12:40.979 "percent": 32 00:12:40.979 } 00:12:40.979 }, 00:12:40.979 "base_bdevs_list": [ 00:12:40.979 { 00:12:40.979 "name": "spare", 00:12:40.979 "uuid": "2bb6dc27-7822-5916-90c4-e2c3b60ffe9d", 00:12:40.979 "is_configured": true, 00:12:40.979 "data_offset": 2048, 00:12:40.979 "data_size": 63488 00:12:40.979 }, 00:12:40.979 { 00:12:40.979 "name": null, 00:12:40.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.979 "is_configured": false, 00:12:40.979 "data_offset": 2048, 00:12:40.979 "data_size": 63488 00:12:40.979 }, 00:12:40.979 { 00:12:40.979 "name": "BaseBdev3", 00:12:40.979 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:40.979 "is_configured": true, 00:12:40.979 "data_offset": 2048, 00:12:40.979 "data_size": 63488 00:12:40.979 }, 00:12:40.979 { 00:12:40.979 "name": "BaseBdev4", 00:12:40.979 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:40.979 "is_configured": true, 00:12:40.979 "data_offset": 2048, 00:12:40.979 "data_size": 63488 00:12:40.979 } 00:12:40.979 ] 00:12:40.979 }' 00:12:40.979 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.238 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:41.238 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.238 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:41.238 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:41.238 23:52:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.238 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.238 [2024-11-02 23:52:35.155736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:41.238 [2024-11-02 23:52:35.203591] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:41.238 [2024-11-02 23:52:35.203644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.238 [2024-11-02 23:52:35.203662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:41.239 [2024-11-02 23:52:35.203669] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:41.239 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.239 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:41.239 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.239 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.239 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.239 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.239 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:41.239 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.239 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.239 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.239 23:52:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.239 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.239 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.239 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.239 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.239 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.239 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.239 "name": "raid_bdev1", 00:12:41.239 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:41.239 "strip_size_kb": 0, 00:12:41.239 "state": "online", 00:12:41.239 "raid_level": "raid1", 00:12:41.239 "superblock": true, 00:12:41.239 "num_base_bdevs": 4, 00:12:41.239 "num_base_bdevs_discovered": 2, 00:12:41.239 "num_base_bdevs_operational": 2, 00:12:41.239 "base_bdevs_list": [ 00:12:41.239 { 00:12:41.239 "name": null, 00:12:41.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.239 "is_configured": false, 00:12:41.239 "data_offset": 0, 00:12:41.239 "data_size": 63488 00:12:41.239 }, 00:12:41.239 { 00:12:41.239 "name": null, 00:12:41.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.239 "is_configured": false, 00:12:41.239 "data_offset": 2048, 00:12:41.239 "data_size": 63488 00:12:41.239 }, 00:12:41.239 { 00:12:41.239 "name": "BaseBdev3", 00:12:41.239 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:41.239 "is_configured": true, 00:12:41.239 "data_offset": 2048, 00:12:41.239 "data_size": 63488 00:12:41.239 }, 00:12:41.239 { 00:12:41.239 "name": "BaseBdev4", 00:12:41.239 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:41.239 "is_configured": true, 00:12:41.239 "data_offset": 2048, 00:12:41.239 
"data_size": 63488 00:12:41.239 } 00:12:41.239 ] 00:12:41.239 }' 00:12:41.239 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.239 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.808 "name": "raid_bdev1", 00:12:41.808 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:41.808 "strip_size_kb": 0, 00:12:41.808 "state": "online", 00:12:41.808 "raid_level": "raid1", 00:12:41.808 "superblock": true, 00:12:41.808 "num_base_bdevs": 4, 00:12:41.808 "num_base_bdevs_discovered": 2, 00:12:41.808 "num_base_bdevs_operational": 2, 00:12:41.808 "base_bdevs_list": [ 00:12:41.808 { 00:12:41.808 "name": null, 00:12:41.808 "uuid": "00000000-0000-0000-0000-000000000000", 
00:12:41.808 "is_configured": false, 00:12:41.808 "data_offset": 0, 00:12:41.808 "data_size": 63488 00:12:41.808 }, 00:12:41.808 { 00:12:41.808 "name": null, 00:12:41.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.808 "is_configured": false, 00:12:41.808 "data_offset": 2048, 00:12:41.808 "data_size": 63488 00:12:41.808 }, 00:12:41.808 { 00:12:41.808 "name": "BaseBdev3", 00:12:41.808 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:41.808 "is_configured": true, 00:12:41.808 "data_offset": 2048, 00:12:41.808 "data_size": 63488 00:12:41.808 }, 00:12:41.808 { 00:12:41.808 "name": "BaseBdev4", 00:12:41.808 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:41.808 "is_configured": true, 00:12:41.808 "data_offset": 2048, 00:12:41.808 "data_size": 63488 00:12:41.808 } 00:12:41.808 ] 00:12:41.808 }' 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.808 23:52:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.808 [2024-11-02 23:52:35.803223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:41.808 [2024-11-02 23:52:35.803300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.808 [2024-11-02 23:52:35.803323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:41.808 [2024-11-02 23:52:35.803333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.808 [2024-11-02 23:52:35.803768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.808 [2024-11-02 23:52:35.803815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:41.808 [2024-11-02 23:52:35.803905] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:41.808 [2024-11-02 23:52:35.803927] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:41.808 [2024-11-02 23:52:35.803938] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:41.808 [2024-11-02 23:52:35.803947] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:41.808 BaseBdev1 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.808 23:52:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:42.746 23:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:42.746 23:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.746 23:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:12:42.746 23:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.747 23:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.747 23:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:42.747 23:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.747 23:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.747 23:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.747 23:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.747 23:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.747 23:52:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.747 23:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.747 23:52:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.006 23:52:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.006 23:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.006 "name": "raid_bdev1", 00:12:43.006 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:43.006 "strip_size_kb": 0, 00:12:43.006 "state": "online", 00:12:43.006 "raid_level": "raid1", 00:12:43.006 "superblock": true, 00:12:43.006 "num_base_bdevs": 4, 00:12:43.006 "num_base_bdevs_discovered": 2, 00:12:43.006 "num_base_bdevs_operational": 2, 00:12:43.006 "base_bdevs_list": [ 00:12:43.006 { 00:12:43.006 "name": null, 00:12:43.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.006 "is_configured": false, 00:12:43.006 
"data_offset": 0, 00:12:43.006 "data_size": 63488 00:12:43.006 }, 00:12:43.006 { 00:12:43.006 "name": null, 00:12:43.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.006 "is_configured": false, 00:12:43.006 "data_offset": 2048, 00:12:43.006 "data_size": 63488 00:12:43.006 }, 00:12:43.006 { 00:12:43.006 "name": "BaseBdev3", 00:12:43.006 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:43.006 "is_configured": true, 00:12:43.006 "data_offset": 2048, 00:12:43.006 "data_size": 63488 00:12:43.006 }, 00:12:43.006 { 00:12:43.006 "name": "BaseBdev4", 00:12:43.006 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:43.006 "is_configured": true, 00:12:43.006 "data_offset": 2048, 00:12:43.006 "data_size": 63488 00:12:43.006 } 00:12:43.006 ] 00:12:43.006 }' 00:12:43.006 23:52:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.006 23:52:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.266 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:43.266 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.266 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:43.266 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:43.266 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.266 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.266 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.266 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.266 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:12:43.266 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.266 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.266 "name": "raid_bdev1", 00:12:43.266 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:43.266 "strip_size_kb": 0, 00:12:43.266 "state": "online", 00:12:43.266 "raid_level": "raid1", 00:12:43.266 "superblock": true, 00:12:43.266 "num_base_bdevs": 4, 00:12:43.266 "num_base_bdevs_discovered": 2, 00:12:43.266 "num_base_bdevs_operational": 2, 00:12:43.266 "base_bdevs_list": [ 00:12:43.266 { 00:12:43.266 "name": null, 00:12:43.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.266 "is_configured": false, 00:12:43.266 "data_offset": 0, 00:12:43.266 "data_size": 63488 00:12:43.266 }, 00:12:43.266 { 00:12:43.266 "name": null, 00:12:43.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.266 "is_configured": false, 00:12:43.266 "data_offset": 2048, 00:12:43.266 "data_size": 63488 00:12:43.266 }, 00:12:43.266 { 00:12:43.266 "name": "BaseBdev3", 00:12:43.266 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:43.266 "is_configured": true, 00:12:43.266 "data_offset": 2048, 00:12:43.266 "data_size": 63488 00:12:43.266 }, 00:12:43.266 { 00:12:43.266 "name": "BaseBdev4", 00:12:43.266 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:43.266 "is_configured": true, 00:12:43.266 "data_offset": 2048, 00:12:43.266 "data_size": 63488 00:12:43.266 } 00:12:43.266 ] 00:12:43.266 }' 00:12:43.266 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.526 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:43.526 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.526 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:43.526 
23:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:43.526 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:12:43.526 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:43.526 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:43.526 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.526 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:43.526 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.526 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:43.526 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.526 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.526 [2024-11-02 23:52:37.420872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.526 [2024-11-02 23:52:37.421033] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:43.526 [2024-11-02 23:52:37.421048] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:43.526 request: 00:12:43.526 { 00:12:43.526 "base_bdev": "BaseBdev1", 00:12:43.526 "raid_bdev": "raid_bdev1", 00:12:43.526 "method": "bdev_raid_add_base_bdev", 00:12:43.526 "req_id": 1 00:12:43.526 } 00:12:43.526 Got JSON-RPC error response 00:12:43.526 response: 00:12:43.526 { 00:12:43.526 "code": -22, 00:12:43.526 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:43.526 } 00:12:43.526 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:43.526 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:12:43.526 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:43.526 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:43.526 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:43.526 23:52:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:44.464 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:44.464 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.464 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.464 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.464 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.464 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:44.464 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.464 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.464 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.464 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.464 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.464 23:52:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.464 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.464 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.464 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.464 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.464 "name": "raid_bdev1", 00:12:44.464 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:44.464 "strip_size_kb": 0, 00:12:44.464 "state": "online", 00:12:44.464 "raid_level": "raid1", 00:12:44.464 "superblock": true, 00:12:44.464 "num_base_bdevs": 4, 00:12:44.464 "num_base_bdevs_discovered": 2, 00:12:44.464 "num_base_bdevs_operational": 2, 00:12:44.464 "base_bdevs_list": [ 00:12:44.464 { 00:12:44.464 "name": null, 00:12:44.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.464 "is_configured": false, 00:12:44.464 "data_offset": 0, 00:12:44.464 "data_size": 63488 00:12:44.464 }, 00:12:44.464 { 00:12:44.464 "name": null, 00:12:44.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.464 "is_configured": false, 00:12:44.464 "data_offset": 2048, 00:12:44.464 "data_size": 63488 00:12:44.464 }, 00:12:44.464 { 00:12:44.464 "name": "BaseBdev3", 00:12:44.464 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:44.464 "is_configured": true, 00:12:44.464 "data_offset": 2048, 00:12:44.464 "data_size": 63488 00:12:44.464 }, 00:12:44.464 { 00:12:44.464 "name": "BaseBdev4", 00:12:44.464 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:44.464 "is_configured": true, 00:12:44.464 "data_offset": 2048, 00:12:44.464 "data_size": 63488 00:12:44.464 } 00:12:44.464 ] 00:12:44.464 }' 00:12:44.464 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.464 23:52:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.032 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:45.032 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.032 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:45.032 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:45.032 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.032 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.032 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.032 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.032 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.032 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.032 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.032 "name": "raid_bdev1", 00:12:45.032 "uuid": "5be36905-0a96-400b-93cb-adc652dcce9c", 00:12:45.032 "strip_size_kb": 0, 00:12:45.032 "state": "online", 00:12:45.032 "raid_level": "raid1", 00:12:45.032 "superblock": true, 00:12:45.032 "num_base_bdevs": 4, 00:12:45.032 "num_base_bdevs_discovered": 2, 00:12:45.032 "num_base_bdevs_operational": 2, 00:12:45.032 "base_bdevs_list": [ 00:12:45.032 { 00:12:45.032 "name": null, 00:12:45.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.032 "is_configured": false, 00:12:45.032 "data_offset": 0, 00:12:45.032 "data_size": 63488 00:12:45.032 }, 00:12:45.032 { 00:12:45.032 "name": null, 00:12:45.032 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:45.032 "is_configured": false, 00:12:45.032 "data_offset": 2048, 00:12:45.032 "data_size": 63488 00:12:45.032 }, 00:12:45.032 { 00:12:45.032 "name": "BaseBdev3", 00:12:45.032 "uuid": "663abaf8-a834-5cf0-b63e-b1dcc40ec3dd", 00:12:45.032 "is_configured": true, 00:12:45.032 "data_offset": 2048, 00:12:45.032 "data_size": 63488 00:12:45.032 }, 00:12:45.032 { 00:12:45.032 "name": "BaseBdev4", 00:12:45.032 "uuid": "fee7843b-fd32-5c30-8f5e-ae4cdd8de9d2", 00:12:45.032 "is_configured": true, 00:12:45.032 "data_offset": 2048, 00:12:45.032 "data_size": 63488 00:12:45.032 } 00:12:45.032 ] 00:12:45.032 }' 00:12:45.032 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.032 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:45.032 23:52:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.032 23:52:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:45.032 23:52:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89548 00:12:45.032 23:52:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 89548 ']' 00:12:45.032 23:52:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 89548 00:12:45.032 23:52:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:12:45.032 23:52:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:45.032 23:52:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89548 00:12:45.032 23:52:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:45.032 23:52:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo 
']' 00:12:45.032 killing process with pid 89548 00:12:45.032 23:52:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89548' 00:12:45.032 23:52:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 89548 00:12:45.032 Received shutdown signal, test time was about 17.588910 seconds 00:12:45.032 00:12:45.032 Latency(us) 00:12:45.032 [2024-11-02T23:52:39.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.032 [2024-11-02T23:52:39.127Z] =================================================================================================================== 00:12:45.032 [2024-11-02T23:52:39.127Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:45.032 [2024-11-02 23:52:39.073562] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:45.032 [2024-11-02 23:52:39.073686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.032 [2024-11-02 23:52:39.073782] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:45.032 [2024-11-02 23:52:39.073803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:12:45.032 23:52:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 89548 00:12:45.032 [2024-11-02 23:52:39.119221] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:45.292 23:52:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:45.292 00:12:45.292 real 0m19.533s 00:12:45.292 user 0m26.062s 00:12:45.292 sys 0m2.674s 00:12:45.292 23:52:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:45.292 23:52:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.292 ************************************ 00:12:45.292 END TEST raid_rebuild_test_sb_io 00:12:45.292 
************************************ 00:12:45.292 23:52:39 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:12:45.292 23:52:39 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:12:45.292 23:52:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:45.292 23:52:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:45.292 23:52:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:45.552 ************************************ 00:12:45.552 START TEST raid5f_state_function_test 00:12:45.552 ************************************ 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:45.552 23:52:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90253 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:45.552 23:52:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90253' 00:12:45.552 Process raid pid: 90253 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90253 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 90253 ']' 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:45.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:45.552 23:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.552 [2024-11-02 23:52:39.487049] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:12:45.552 [2024-11-02 23:52:39.487172] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.552 [2024-11-02 23:52:39.639827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.811 [2024-11-02 23:52:39.665833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.811 [2024-11-02 23:52:39.708466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.811 [2024-11-02 23:52:39.708509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.392 [2024-11-02 23:52:40.313128] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:46.392 [2024-11-02 23:52:40.313228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:46.392 [2024-11-02 23:52:40.313246] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:46.392 [2024-11-02 23:52:40.313256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:46.392 [2024-11-02 23:52:40.313263] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:46.392 [2024-11-02 23:52:40.313273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.392 "name": "Existed_Raid", 00:12:46.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.392 "strip_size_kb": 64, 00:12:46.392 "state": "configuring", 00:12:46.392 "raid_level": "raid5f", 00:12:46.392 "superblock": false, 00:12:46.392 "num_base_bdevs": 3, 00:12:46.392 "num_base_bdevs_discovered": 0, 00:12:46.392 "num_base_bdevs_operational": 3, 00:12:46.392 "base_bdevs_list": [ 00:12:46.392 { 00:12:46.392 "name": "BaseBdev1", 00:12:46.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.392 "is_configured": false, 00:12:46.392 "data_offset": 0, 00:12:46.392 "data_size": 0 00:12:46.392 }, 00:12:46.392 { 00:12:46.392 "name": "BaseBdev2", 00:12:46.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.392 "is_configured": false, 00:12:46.392 "data_offset": 0, 00:12:46.392 "data_size": 0 00:12:46.392 }, 00:12:46.392 { 00:12:46.392 "name": "BaseBdev3", 00:12:46.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.392 "is_configured": false, 00:12:46.392 "data_offset": 0, 00:12:46.392 "data_size": 0 00:12:46.392 } 00:12:46.392 ] 00:12:46.392 }' 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.392 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.965 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:46.965 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.965 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.965 [2024-11-02 23:52:40.756293] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:46.965 [2024-11-02 23:52:40.756338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001200 name Existed_Raid, state configuring 00:12:46.965 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.965 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:46.965 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.965 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.965 [2024-11-02 23:52:40.768284] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:46.965 [2024-11-02 23:52:40.768323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:46.965 [2024-11-02 23:52:40.768331] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:46.965 [2024-11-02 23:52:40.768339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:46.965 [2024-11-02 23:52:40.768345] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:46.965 [2024-11-02 23:52:40.768353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:46.965 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.965 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:46.965 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.965 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.965 [2024-11-02 23:52:40.788821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:46.965 BaseBdev1 00:12:46.965 23:52:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.965 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:46.965 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:46.965 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:46.965 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.966 [ 00:12:46.966 { 00:12:46.966 "name": "BaseBdev1", 00:12:46.966 "aliases": [ 00:12:46.966 "c8a18a12-427e-4dc7-af59-42b951fc6b80" 00:12:46.966 ], 00:12:46.966 "product_name": "Malloc disk", 00:12:46.966 "block_size": 512, 00:12:46.966 "num_blocks": 65536, 00:12:46.966 "uuid": "c8a18a12-427e-4dc7-af59-42b951fc6b80", 00:12:46.966 "assigned_rate_limits": { 00:12:46.966 "rw_ios_per_sec": 0, 00:12:46.966 
"rw_mbytes_per_sec": 0, 00:12:46.966 "r_mbytes_per_sec": 0, 00:12:46.966 "w_mbytes_per_sec": 0 00:12:46.966 }, 00:12:46.966 "claimed": true, 00:12:46.966 "claim_type": "exclusive_write", 00:12:46.966 "zoned": false, 00:12:46.966 "supported_io_types": { 00:12:46.966 "read": true, 00:12:46.966 "write": true, 00:12:46.966 "unmap": true, 00:12:46.966 "flush": true, 00:12:46.966 "reset": true, 00:12:46.966 "nvme_admin": false, 00:12:46.966 "nvme_io": false, 00:12:46.966 "nvme_io_md": false, 00:12:46.966 "write_zeroes": true, 00:12:46.966 "zcopy": true, 00:12:46.966 "get_zone_info": false, 00:12:46.966 "zone_management": false, 00:12:46.966 "zone_append": false, 00:12:46.966 "compare": false, 00:12:46.966 "compare_and_write": false, 00:12:46.966 "abort": true, 00:12:46.966 "seek_hole": false, 00:12:46.966 "seek_data": false, 00:12:46.966 "copy": true, 00:12:46.966 "nvme_iov_md": false 00:12:46.966 }, 00:12:46.966 "memory_domains": [ 00:12:46.966 { 00:12:46.966 "dma_device_id": "system", 00:12:46.966 "dma_device_type": 1 00:12:46.966 }, 00:12:46.966 { 00:12:46.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.966 "dma_device_type": 2 00:12:46.966 } 00:12:46.966 ], 00:12:46.966 "driver_specific": {} 00:12:46.966 } 00:12:46.966 ] 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:46.966 23:52:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.966 "name": "Existed_Raid", 00:12:46.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.966 "strip_size_kb": 64, 00:12:46.966 "state": "configuring", 00:12:46.966 "raid_level": "raid5f", 00:12:46.966 "superblock": false, 00:12:46.966 "num_base_bdevs": 3, 00:12:46.966 "num_base_bdevs_discovered": 1, 00:12:46.966 "num_base_bdevs_operational": 3, 00:12:46.966 "base_bdevs_list": [ 00:12:46.966 { 00:12:46.966 "name": "BaseBdev1", 00:12:46.966 "uuid": "c8a18a12-427e-4dc7-af59-42b951fc6b80", 00:12:46.966 "is_configured": true, 00:12:46.966 "data_offset": 0, 00:12:46.966 "data_size": 65536 00:12:46.966 }, 00:12:46.966 { 00:12:46.966 "name": 
"BaseBdev2", 00:12:46.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.966 "is_configured": false, 00:12:46.966 "data_offset": 0, 00:12:46.966 "data_size": 0 00:12:46.966 }, 00:12:46.966 { 00:12:46.966 "name": "BaseBdev3", 00:12:46.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.966 "is_configured": false, 00:12:46.966 "data_offset": 0, 00:12:46.966 "data_size": 0 00:12:46.966 } 00:12:46.966 ] 00:12:46.966 }' 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.966 23:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.225 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:47.225 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.225 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.225 [2024-11-02 23:52:41.268004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:47.225 [2024-11-02 23:52:41.268054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:12:47.225 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.225 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:47.225 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.225 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.225 [2024-11-02 23:52:41.280013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:47.225 [2024-11-02 23:52:41.281806] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:12:47.225 [2024-11-02 23:52:41.281843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:47.225 [2024-11-02 23:52:41.281852] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:47.225 [2024-11-02 23:52:41.281878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:47.225 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.225 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:47.225 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:47.225 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:47.226 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.226 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.226 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:47.226 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.226 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.226 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.226 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.226 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.226 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.226 23:52:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.226 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.226 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.226 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.226 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.485 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.485 "name": "Existed_Raid", 00:12:47.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.485 "strip_size_kb": 64, 00:12:47.485 "state": "configuring", 00:12:47.485 "raid_level": "raid5f", 00:12:47.485 "superblock": false, 00:12:47.485 "num_base_bdevs": 3, 00:12:47.485 "num_base_bdevs_discovered": 1, 00:12:47.485 "num_base_bdevs_operational": 3, 00:12:47.485 "base_bdevs_list": [ 00:12:47.485 { 00:12:47.485 "name": "BaseBdev1", 00:12:47.485 "uuid": "c8a18a12-427e-4dc7-af59-42b951fc6b80", 00:12:47.485 "is_configured": true, 00:12:47.485 "data_offset": 0, 00:12:47.485 "data_size": 65536 00:12:47.485 }, 00:12:47.485 { 00:12:47.485 "name": "BaseBdev2", 00:12:47.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.485 "is_configured": false, 00:12:47.485 "data_offset": 0, 00:12:47.485 "data_size": 0 00:12:47.485 }, 00:12:47.485 { 00:12:47.485 "name": "BaseBdev3", 00:12:47.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.485 "is_configured": false, 00:12:47.485 "data_offset": 0, 00:12:47.485 "data_size": 0 00:12:47.485 } 00:12:47.485 ] 00:12:47.485 }' 00:12:47.485 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.485 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.745 [2024-11-02 23:52:41.742233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:47.745 BaseBdev2 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:47.745 [ 00:12:47.745 { 00:12:47.745 "name": "BaseBdev2", 00:12:47.745 "aliases": [ 00:12:47.745 "4f9a5d2b-726b-4a55-9ee0-f959972eab98" 00:12:47.745 ], 00:12:47.745 "product_name": "Malloc disk", 00:12:47.745 "block_size": 512, 00:12:47.745 "num_blocks": 65536, 00:12:47.745 "uuid": "4f9a5d2b-726b-4a55-9ee0-f959972eab98", 00:12:47.745 "assigned_rate_limits": { 00:12:47.745 "rw_ios_per_sec": 0, 00:12:47.745 "rw_mbytes_per_sec": 0, 00:12:47.745 "r_mbytes_per_sec": 0, 00:12:47.745 "w_mbytes_per_sec": 0 00:12:47.745 }, 00:12:47.745 "claimed": true, 00:12:47.745 "claim_type": "exclusive_write", 00:12:47.745 "zoned": false, 00:12:47.745 "supported_io_types": { 00:12:47.745 "read": true, 00:12:47.745 "write": true, 00:12:47.745 "unmap": true, 00:12:47.745 "flush": true, 00:12:47.745 "reset": true, 00:12:47.745 "nvme_admin": false, 00:12:47.745 "nvme_io": false, 00:12:47.745 "nvme_io_md": false, 00:12:47.745 "write_zeroes": true, 00:12:47.745 "zcopy": true, 00:12:47.745 "get_zone_info": false, 00:12:47.745 "zone_management": false, 00:12:47.745 "zone_append": false, 00:12:47.745 "compare": false, 00:12:47.745 "compare_and_write": false, 00:12:47.745 "abort": true, 00:12:47.745 "seek_hole": false, 00:12:47.745 "seek_data": false, 00:12:47.745 "copy": true, 00:12:47.745 "nvme_iov_md": false 00:12:47.745 }, 00:12:47.745 "memory_domains": [ 00:12:47.745 { 00:12:47.745 "dma_device_id": "system", 00:12:47.745 "dma_device_type": 1 00:12:47.745 }, 00:12:47.745 { 00:12:47.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.745 "dma_device_type": 2 00:12:47.745 } 00:12:47.745 ], 00:12:47.745 "driver_specific": {} 00:12:47.745 } 00:12:47.745 ] 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.745 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.746 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.746 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.746 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.746 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.746 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.746 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.746 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.746 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:12:47.746 "name": "Existed_Raid", 00:12:47.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.746 "strip_size_kb": 64, 00:12:47.746 "state": "configuring", 00:12:47.746 "raid_level": "raid5f", 00:12:47.746 "superblock": false, 00:12:47.746 "num_base_bdevs": 3, 00:12:47.746 "num_base_bdevs_discovered": 2, 00:12:47.746 "num_base_bdevs_operational": 3, 00:12:47.746 "base_bdevs_list": [ 00:12:47.746 { 00:12:47.746 "name": "BaseBdev1", 00:12:47.746 "uuid": "c8a18a12-427e-4dc7-af59-42b951fc6b80", 00:12:47.746 "is_configured": true, 00:12:47.746 "data_offset": 0, 00:12:47.746 "data_size": 65536 00:12:47.746 }, 00:12:47.746 { 00:12:47.746 "name": "BaseBdev2", 00:12:47.746 "uuid": "4f9a5d2b-726b-4a55-9ee0-f959972eab98", 00:12:47.746 "is_configured": true, 00:12:47.746 "data_offset": 0, 00:12:47.746 "data_size": 65536 00:12:47.746 }, 00:12:47.746 { 00:12:47.746 "name": "BaseBdev3", 00:12:47.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.746 "is_configured": false, 00:12:47.746 "data_offset": 0, 00:12:47.746 "data_size": 0 00:12:47.746 } 00:12:47.746 ] 00:12:47.746 }' 00:12:47.746 23:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.746 23:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.315 [2024-11-02 23:52:42.276208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:48.315 [2024-11-02 23:52:42.276272] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:12:48.315 [2024-11-02 23:52:42.276286] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:48.315 [2024-11-02 23:52:42.276648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:48.315 [2024-11-02 23:52:42.277221] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:12:48.315 [2024-11-02 23:52:42.277245] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:12:48.315 [2024-11-02 23:52:42.277480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.315 BaseBdev3 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.315 [ 00:12:48.315 { 00:12:48.315 "name": "BaseBdev3", 00:12:48.315 "aliases": [ 00:12:48.315 "d695f8e5-6eab-42ce-9dfd-f1dab61cfc54" 00:12:48.315 ], 00:12:48.315 "product_name": "Malloc disk", 00:12:48.315 "block_size": 512, 00:12:48.315 "num_blocks": 65536, 00:12:48.315 "uuid": "d695f8e5-6eab-42ce-9dfd-f1dab61cfc54", 00:12:48.315 "assigned_rate_limits": { 00:12:48.315 "rw_ios_per_sec": 0, 00:12:48.315 "rw_mbytes_per_sec": 0, 00:12:48.315 "r_mbytes_per_sec": 0, 00:12:48.315 "w_mbytes_per_sec": 0 00:12:48.315 }, 00:12:48.315 "claimed": true, 00:12:48.315 "claim_type": "exclusive_write", 00:12:48.315 "zoned": false, 00:12:48.315 "supported_io_types": { 00:12:48.315 "read": true, 00:12:48.315 "write": true, 00:12:48.315 "unmap": true, 00:12:48.315 "flush": true, 00:12:48.315 "reset": true, 00:12:48.315 "nvme_admin": false, 00:12:48.315 "nvme_io": false, 00:12:48.315 "nvme_io_md": false, 00:12:48.315 "write_zeroes": true, 00:12:48.315 "zcopy": true, 00:12:48.315 "get_zone_info": false, 00:12:48.315 "zone_management": false, 00:12:48.315 "zone_append": false, 00:12:48.315 "compare": false, 00:12:48.315 "compare_and_write": false, 00:12:48.315 "abort": true, 00:12:48.315 "seek_hole": false, 00:12:48.315 "seek_data": false, 00:12:48.315 "copy": true, 00:12:48.315 "nvme_iov_md": false 00:12:48.315 }, 00:12:48.315 "memory_domains": [ 00:12:48.315 { 00:12:48.315 "dma_device_id": "system", 00:12:48.315 "dma_device_type": 1 00:12:48.315 }, 00:12:48.315 { 00:12:48.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.315 "dma_device_type": 2 00:12:48.315 } 00:12:48.315 ], 00:12:48.315 "driver_specific": {} 00:12:48.315 } 00:12:48.315 ] 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.315 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.316 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.316 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.316 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.316 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.316 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.316 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.316 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.316 23:52:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.316 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.316 "name": "Existed_Raid", 00:12:48.316 "uuid": "659699b8-2738-456b-ad16-40f953225a7f", 00:12:48.316 "strip_size_kb": 64, 00:12:48.316 "state": "online", 00:12:48.316 "raid_level": "raid5f", 00:12:48.316 "superblock": false, 00:12:48.316 "num_base_bdevs": 3, 00:12:48.316 "num_base_bdevs_discovered": 3, 00:12:48.316 "num_base_bdevs_operational": 3, 00:12:48.316 "base_bdevs_list": [ 00:12:48.316 { 00:12:48.316 "name": "BaseBdev1", 00:12:48.316 "uuid": "c8a18a12-427e-4dc7-af59-42b951fc6b80", 00:12:48.316 "is_configured": true, 00:12:48.316 "data_offset": 0, 00:12:48.316 "data_size": 65536 00:12:48.316 }, 00:12:48.316 { 00:12:48.316 "name": "BaseBdev2", 00:12:48.316 "uuid": "4f9a5d2b-726b-4a55-9ee0-f959972eab98", 00:12:48.316 "is_configured": true, 00:12:48.316 "data_offset": 0, 00:12:48.316 "data_size": 65536 00:12:48.316 }, 00:12:48.316 { 00:12:48.316 "name": "BaseBdev3", 00:12:48.316 "uuid": "d695f8e5-6eab-42ce-9dfd-f1dab61cfc54", 00:12:48.316 "is_configured": true, 00:12:48.316 "data_offset": 0, 00:12:48.316 "data_size": 65536 00:12:48.316 } 00:12:48.316 ] 00:12:48.316 }' 00:12:48.316 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.316 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:48.886 23:52:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.886 [2024-11-02 23:52:42.827483] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:48.886 "name": "Existed_Raid", 00:12:48.886 "aliases": [ 00:12:48.886 "659699b8-2738-456b-ad16-40f953225a7f" 00:12:48.886 ], 00:12:48.886 "product_name": "Raid Volume", 00:12:48.886 "block_size": 512, 00:12:48.886 "num_blocks": 131072, 00:12:48.886 "uuid": "659699b8-2738-456b-ad16-40f953225a7f", 00:12:48.886 "assigned_rate_limits": { 00:12:48.886 "rw_ios_per_sec": 0, 00:12:48.886 "rw_mbytes_per_sec": 0, 00:12:48.886 "r_mbytes_per_sec": 0, 00:12:48.886 "w_mbytes_per_sec": 0 00:12:48.886 }, 00:12:48.886 "claimed": false, 00:12:48.886 "zoned": false, 00:12:48.886 "supported_io_types": { 00:12:48.886 "read": true, 00:12:48.886 "write": true, 00:12:48.886 "unmap": false, 00:12:48.886 "flush": false, 00:12:48.886 "reset": true, 00:12:48.886 "nvme_admin": false, 00:12:48.886 "nvme_io": false, 00:12:48.886 "nvme_io_md": false, 00:12:48.886 "write_zeroes": true, 00:12:48.886 "zcopy": false, 00:12:48.886 "get_zone_info": false, 00:12:48.886 "zone_management": false, 00:12:48.886 "zone_append": false, 
00:12:48.886 "compare": false, 00:12:48.886 "compare_and_write": false, 00:12:48.886 "abort": false, 00:12:48.886 "seek_hole": false, 00:12:48.886 "seek_data": false, 00:12:48.886 "copy": false, 00:12:48.886 "nvme_iov_md": false 00:12:48.886 }, 00:12:48.886 "driver_specific": { 00:12:48.886 "raid": { 00:12:48.886 "uuid": "659699b8-2738-456b-ad16-40f953225a7f", 00:12:48.886 "strip_size_kb": 64, 00:12:48.886 "state": "online", 00:12:48.886 "raid_level": "raid5f", 00:12:48.886 "superblock": false, 00:12:48.886 "num_base_bdevs": 3, 00:12:48.886 "num_base_bdevs_discovered": 3, 00:12:48.886 "num_base_bdevs_operational": 3, 00:12:48.886 "base_bdevs_list": [ 00:12:48.886 { 00:12:48.886 "name": "BaseBdev1", 00:12:48.886 "uuid": "c8a18a12-427e-4dc7-af59-42b951fc6b80", 00:12:48.886 "is_configured": true, 00:12:48.886 "data_offset": 0, 00:12:48.886 "data_size": 65536 00:12:48.886 }, 00:12:48.886 { 00:12:48.886 "name": "BaseBdev2", 00:12:48.886 "uuid": "4f9a5d2b-726b-4a55-9ee0-f959972eab98", 00:12:48.886 "is_configured": true, 00:12:48.886 "data_offset": 0, 00:12:48.886 "data_size": 65536 00:12:48.886 }, 00:12:48.886 { 00:12:48.886 "name": "BaseBdev3", 00:12:48.886 "uuid": "d695f8e5-6eab-42ce-9dfd-f1dab61cfc54", 00:12:48.886 "is_configured": true, 00:12:48.886 "data_offset": 0, 00:12:48.886 "data_size": 65536 00:12:48.886 } 00:12:48.886 ] 00:12:48.886 } 00:12:48.886 } 00:12:48.886 }' 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:48.886 BaseBdev2 00:12:48.886 BaseBdev3' 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.886 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.146 23:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.147 [2024-11-02 23:52:43.122827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:49.147 
23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.147 "name": "Existed_Raid", 00:12:49.147 "uuid": "659699b8-2738-456b-ad16-40f953225a7f", 00:12:49.147 "strip_size_kb": 64, 00:12:49.147 "state": 
"online", 00:12:49.147 "raid_level": "raid5f", 00:12:49.147 "superblock": false, 00:12:49.147 "num_base_bdevs": 3, 00:12:49.147 "num_base_bdevs_discovered": 2, 00:12:49.147 "num_base_bdevs_operational": 2, 00:12:49.147 "base_bdevs_list": [ 00:12:49.147 { 00:12:49.147 "name": null, 00:12:49.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.147 "is_configured": false, 00:12:49.147 "data_offset": 0, 00:12:49.147 "data_size": 65536 00:12:49.147 }, 00:12:49.147 { 00:12:49.147 "name": "BaseBdev2", 00:12:49.147 "uuid": "4f9a5d2b-726b-4a55-9ee0-f959972eab98", 00:12:49.147 "is_configured": true, 00:12:49.147 "data_offset": 0, 00:12:49.147 "data_size": 65536 00:12:49.147 }, 00:12:49.147 { 00:12:49.147 "name": "BaseBdev3", 00:12:49.147 "uuid": "d695f8e5-6eab-42ce-9dfd-f1dab61cfc54", 00:12:49.147 "is_configured": true, 00:12:49.147 "data_offset": 0, 00:12:49.147 "data_size": 65536 00:12:49.147 } 00:12:49.147 ] 00:12:49.147 }' 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.147 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.718 [2024-11-02 23:52:43.609191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:49.718 [2024-11-02 23:52:43.609282] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:49.718 [2024-11-02 23:52:43.620149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.718 [2024-11-02 23:52:43.680055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:49.718 [2024-11-02 23:52:43.680097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.718 BaseBdev2 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:49.718 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:49.719 [ 00:12:49.719 { 00:12:49.719 "name": "BaseBdev2", 00:12:49.719 "aliases": [ 00:12:49.719 "68e533da-1f73-4a00-b79e-1e90e7eccc69" 00:12:49.719 ], 00:12:49.719 "product_name": "Malloc disk", 00:12:49.719 "block_size": 512, 00:12:49.719 "num_blocks": 65536, 00:12:49.719 "uuid": "68e533da-1f73-4a00-b79e-1e90e7eccc69", 00:12:49.719 "assigned_rate_limits": { 00:12:49.719 "rw_ios_per_sec": 0, 00:12:49.719 "rw_mbytes_per_sec": 0, 00:12:49.719 "r_mbytes_per_sec": 0, 00:12:49.719 "w_mbytes_per_sec": 0 00:12:49.719 }, 00:12:49.719 "claimed": false, 00:12:49.719 "zoned": false, 00:12:49.719 "supported_io_types": { 00:12:49.719 "read": true, 00:12:49.719 "write": true, 00:12:49.719 "unmap": true, 00:12:49.719 "flush": true, 00:12:49.719 "reset": true, 00:12:49.719 "nvme_admin": false, 00:12:49.719 "nvme_io": false, 00:12:49.719 "nvme_io_md": false, 00:12:49.719 "write_zeroes": true, 00:12:49.719 "zcopy": true, 00:12:49.719 "get_zone_info": false, 00:12:49.719 "zone_management": false, 00:12:49.719 "zone_append": false, 00:12:49.719 "compare": false, 00:12:49.719 "compare_and_write": false, 00:12:49.719 "abort": true, 00:12:49.719 "seek_hole": false, 00:12:49.719 "seek_data": false, 00:12:49.719 "copy": true, 00:12:49.719 "nvme_iov_md": false 00:12:49.719 }, 00:12:49.719 "memory_domains": [ 00:12:49.719 { 00:12:49.719 "dma_device_id": "system", 00:12:49.719 "dma_device_type": 1 00:12:49.719 }, 00:12:49.719 { 00:12:49.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.719 "dma_device_type": 2 00:12:49.719 } 00:12:49.719 ], 00:12:49.719 "driver_specific": {} 00:12:49.719 } 00:12:49.719 ] 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.719 BaseBdev3 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.719 23:52:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.979 [ 00:12:49.979 { 00:12:49.979 "name": "BaseBdev3", 00:12:49.979 "aliases": [ 00:12:49.979 "c12d612f-2f0b-41bb-aeef-381fa796e2c4" 00:12:49.979 ], 00:12:49.979 "product_name": "Malloc disk", 00:12:49.979 "block_size": 512, 00:12:49.979 "num_blocks": 65536, 00:12:49.979 "uuid": "c12d612f-2f0b-41bb-aeef-381fa796e2c4", 00:12:49.979 "assigned_rate_limits": { 00:12:49.979 "rw_ios_per_sec": 0, 00:12:49.979 "rw_mbytes_per_sec": 0, 00:12:49.979 "r_mbytes_per_sec": 0, 00:12:49.979 "w_mbytes_per_sec": 0 00:12:49.979 }, 00:12:49.979 "claimed": false, 00:12:49.979 "zoned": false, 00:12:49.979 "supported_io_types": { 00:12:49.979 "read": true, 00:12:49.979 "write": true, 00:12:49.979 "unmap": true, 00:12:49.979 "flush": true, 00:12:49.979 "reset": true, 00:12:49.979 "nvme_admin": false, 00:12:49.979 "nvme_io": false, 00:12:49.979 "nvme_io_md": false, 00:12:49.979 "write_zeroes": true, 00:12:49.979 "zcopy": true, 00:12:49.979 "get_zone_info": false, 00:12:49.979 "zone_management": false, 00:12:49.979 "zone_append": false, 00:12:49.979 "compare": false, 00:12:49.979 "compare_and_write": false, 00:12:49.979 "abort": true, 00:12:49.979 "seek_hole": false, 00:12:49.979 "seek_data": false, 00:12:49.979 "copy": true, 00:12:49.979 "nvme_iov_md": false 00:12:49.979 }, 00:12:49.979 "memory_domains": [ 00:12:49.979 { 00:12:49.979 "dma_device_id": "system", 00:12:49.979 "dma_device_type": 1 00:12:49.979 }, 00:12:49.979 { 00:12:49.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.979 "dma_device_type": 2 00:12:49.979 } 00:12:49.979 ], 00:12:49.979 "driver_specific": {} 00:12:49.979 } 00:12:49.979 ] 00:12:49.979 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.979 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:49.979 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:49.979 23:52:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:49.979 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:49.979 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.979 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.979 [2024-11-02 23:52:43.835154] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:49.979 [2024-11-02 23:52:43.835197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:49.979 [2024-11-02 23:52:43.835217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.979 [2024-11-02 23:52:43.836961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:49.980 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.980 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:49.980 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.980 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.980 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:49.980 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.980 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.980 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.980 23:52:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.980 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.980 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.980 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.980 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.980 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.980 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.980 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.980 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.980 "name": "Existed_Raid", 00:12:49.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.980 "strip_size_kb": 64, 00:12:49.980 "state": "configuring", 00:12:49.980 "raid_level": "raid5f", 00:12:49.980 "superblock": false, 00:12:49.980 "num_base_bdevs": 3, 00:12:49.980 "num_base_bdevs_discovered": 2, 00:12:49.980 "num_base_bdevs_operational": 3, 00:12:49.980 "base_bdevs_list": [ 00:12:49.980 { 00:12:49.980 "name": "BaseBdev1", 00:12:49.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.980 "is_configured": false, 00:12:49.980 "data_offset": 0, 00:12:49.980 "data_size": 0 00:12:49.980 }, 00:12:49.980 { 00:12:49.980 "name": "BaseBdev2", 00:12:49.980 "uuid": "68e533da-1f73-4a00-b79e-1e90e7eccc69", 00:12:49.980 "is_configured": true, 00:12:49.980 "data_offset": 0, 00:12:49.980 "data_size": 65536 00:12:49.980 }, 00:12:49.980 { 00:12:49.980 "name": "BaseBdev3", 00:12:49.980 "uuid": "c12d612f-2f0b-41bb-aeef-381fa796e2c4", 00:12:49.980 "is_configured": true, 
00:12:49.980 "data_offset": 0, 00:12:49.980 "data_size": 65536 00:12:49.980 } 00:12:49.980 ] 00:12:49.980 }' 00:12:49.980 23:52:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.980 23:52:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.239 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:50.239 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.239 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.239 [2024-11-02 23:52:44.278459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:50.239 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.239 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:50.239 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.239 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.239 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:50.239 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.239 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.239 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.239 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.239 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.239 23:52:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.239 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.239 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.239 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.239 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.239 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.499 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.499 "name": "Existed_Raid", 00:12:50.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.499 "strip_size_kb": 64, 00:12:50.499 "state": "configuring", 00:12:50.499 "raid_level": "raid5f", 00:12:50.499 "superblock": false, 00:12:50.499 "num_base_bdevs": 3, 00:12:50.499 "num_base_bdevs_discovered": 1, 00:12:50.499 "num_base_bdevs_operational": 3, 00:12:50.499 "base_bdevs_list": [ 00:12:50.500 { 00:12:50.500 "name": "BaseBdev1", 00:12:50.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.500 "is_configured": false, 00:12:50.500 "data_offset": 0, 00:12:50.500 "data_size": 0 00:12:50.500 }, 00:12:50.500 { 00:12:50.500 "name": null, 00:12:50.500 "uuid": "68e533da-1f73-4a00-b79e-1e90e7eccc69", 00:12:50.500 "is_configured": false, 00:12:50.500 "data_offset": 0, 00:12:50.500 "data_size": 65536 00:12:50.500 }, 00:12:50.500 { 00:12:50.500 "name": "BaseBdev3", 00:12:50.500 "uuid": "c12d612f-2f0b-41bb-aeef-381fa796e2c4", 00:12:50.500 "is_configured": true, 00:12:50.500 "data_offset": 0, 00:12:50.500 "data_size": 65536 00:12:50.500 } 00:12:50.500 ] 00:12:50.500 }' 00:12:50.500 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.500 23:52:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.760 [2024-11-02 23:52:44.760489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.760 BaseBdev1 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:50.760 23:52:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.760 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.760 [ 00:12:50.760 { 00:12:50.760 "name": "BaseBdev1", 00:12:50.760 "aliases": [ 00:12:50.760 "7152dffa-cce5-44e8-9d92-c986a24fd895" 00:12:50.760 ], 00:12:50.760 "product_name": "Malloc disk", 00:12:50.760 "block_size": 512, 00:12:50.760 "num_blocks": 65536, 00:12:50.760 "uuid": "7152dffa-cce5-44e8-9d92-c986a24fd895", 00:12:50.760 "assigned_rate_limits": { 00:12:50.760 "rw_ios_per_sec": 0, 00:12:50.760 "rw_mbytes_per_sec": 0, 00:12:50.760 "r_mbytes_per_sec": 0, 00:12:50.760 "w_mbytes_per_sec": 0 00:12:50.760 }, 00:12:50.760 "claimed": true, 00:12:50.761 "claim_type": "exclusive_write", 00:12:50.761 "zoned": false, 00:12:50.761 "supported_io_types": { 00:12:50.761 "read": true, 00:12:50.761 "write": true, 00:12:50.761 "unmap": true, 00:12:50.761 "flush": true, 00:12:50.761 "reset": true, 00:12:50.761 "nvme_admin": false, 00:12:50.761 "nvme_io": false, 00:12:50.761 "nvme_io_md": false, 00:12:50.761 "write_zeroes": true, 00:12:50.761 "zcopy": true, 00:12:50.761 "get_zone_info": false, 00:12:50.761 "zone_management": false, 00:12:50.761 "zone_append": false, 00:12:50.761 
"compare": false, 00:12:50.761 "compare_and_write": false, 00:12:50.761 "abort": true, 00:12:50.761 "seek_hole": false, 00:12:50.761 "seek_data": false, 00:12:50.761 "copy": true, 00:12:50.761 "nvme_iov_md": false 00:12:50.761 }, 00:12:50.761 "memory_domains": [ 00:12:50.761 { 00:12:50.761 "dma_device_id": "system", 00:12:50.761 "dma_device_type": 1 00:12:50.761 }, 00:12:50.761 { 00:12:50.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.761 "dma_device_type": 2 00:12:50.761 } 00:12:50.761 ], 00:12:50.761 "driver_specific": {} 00:12:50.761 } 00:12:50.761 ] 00:12:50.761 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.761 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:50.761 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:50.761 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.761 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.761 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:50.761 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.761 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.761 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.761 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.761 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.761 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.761 23:52:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.761 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.761 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.761 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.761 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.761 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.761 "name": "Existed_Raid", 00:12:50.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.761 "strip_size_kb": 64, 00:12:50.761 "state": "configuring", 00:12:50.761 "raid_level": "raid5f", 00:12:50.761 "superblock": false, 00:12:50.761 "num_base_bdevs": 3, 00:12:50.761 "num_base_bdevs_discovered": 2, 00:12:50.761 "num_base_bdevs_operational": 3, 00:12:50.761 "base_bdevs_list": [ 00:12:50.761 { 00:12:50.761 "name": "BaseBdev1", 00:12:50.761 "uuid": "7152dffa-cce5-44e8-9d92-c986a24fd895", 00:12:50.761 "is_configured": true, 00:12:50.761 "data_offset": 0, 00:12:50.761 "data_size": 65536 00:12:50.761 }, 00:12:50.761 { 00:12:50.761 "name": null, 00:12:50.761 "uuid": "68e533da-1f73-4a00-b79e-1e90e7eccc69", 00:12:50.761 "is_configured": false, 00:12:50.761 "data_offset": 0, 00:12:50.761 "data_size": 65536 00:12:50.761 }, 00:12:50.761 { 00:12:50.761 "name": "BaseBdev3", 00:12:50.761 "uuid": "c12d612f-2f0b-41bb-aeef-381fa796e2c4", 00:12:50.761 "is_configured": true, 00:12:50.761 "data_offset": 0, 00:12:50.761 "data_size": 65536 00:12:50.761 } 00:12:50.761 ] 00:12:50.761 }' 00:12:50.761 23:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.761 23:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.331 23:52:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.331 [2024-11-02 23:52:45.255660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.331 23:52:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.331 "name": "Existed_Raid", 00:12:51.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.331 "strip_size_kb": 64, 00:12:51.331 "state": "configuring", 00:12:51.331 "raid_level": "raid5f", 00:12:51.331 "superblock": false, 00:12:51.331 "num_base_bdevs": 3, 00:12:51.331 "num_base_bdevs_discovered": 1, 00:12:51.331 "num_base_bdevs_operational": 3, 00:12:51.331 "base_bdevs_list": [ 00:12:51.331 { 00:12:51.331 "name": "BaseBdev1", 00:12:51.331 "uuid": "7152dffa-cce5-44e8-9d92-c986a24fd895", 00:12:51.331 "is_configured": true, 00:12:51.331 "data_offset": 0, 00:12:51.331 "data_size": 65536 00:12:51.331 }, 00:12:51.331 { 00:12:51.331 "name": null, 00:12:51.331 "uuid": "68e533da-1f73-4a00-b79e-1e90e7eccc69", 00:12:51.331 "is_configured": false, 00:12:51.331 "data_offset": 0, 00:12:51.331 "data_size": 65536 00:12:51.331 }, 00:12:51.331 { 00:12:51.331 "name": null, 
00:12:51.331 "uuid": "c12d612f-2f0b-41bb-aeef-381fa796e2c4", 00:12:51.331 "is_configured": false, 00:12:51.331 "data_offset": 0, 00:12:51.331 "data_size": 65536 00:12:51.331 } 00:12:51.331 ] 00:12:51.331 }' 00:12:51.331 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.332 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.590 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.590 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.590 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.590 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:51.850 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.850 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:51.850 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:51.850 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.850 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.850 [2024-11-02 23:52:45.726893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:51.850 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.850 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:51.850 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.850 23:52:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.850 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:51.850 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.850 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.850 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.850 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.850 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.850 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.850 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.850 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.850 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.850 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.851 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.851 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.851 "name": "Existed_Raid", 00:12:51.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.851 "strip_size_kb": 64, 00:12:51.851 "state": "configuring", 00:12:51.851 "raid_level": "raid5f", 00:12:51.851 "superblock": false, 00:12:51.851 "num_base_bdevs": 3, 00:12:51.851 "num_base_bdevs_discovered": 2, 00:12:51.851 "num_base_bdevs_operational": 3, 00:12:51.851 "base_bdevs_list": [ 00:12:51.851 { 
00:12:51.851 "name": "BaseBdev1", 00:12:51.851 "uuid": "7152dffa-cce5-44e8-9d92-c986a24fd895", 00:12:51.851 "is_configured": true, 00:12:51.851 "data_offset": 0, 00:12:51.851 "data_size": 65536 00:12:51.851 }, 00:12:51.851 { 00:12:51.851 "name": null, 00:12:51.851 "uuid": "68e533da-1f73-4a00-b79e-1e90e7eccc69", 00:12:51.851 "is_configured": false, 00:12:51.851 "data_offset": 0, 00:12:51.851 "data_size": 65536 00:12:51.851 }, 00:12:51.851 { 00:12:51.851 "name": "BaseBdev3", 00:12:51.851 "uuid": "c12d612f-2f0b-41bb-aeef-381fa796e2c4", 00:12:51.851 "is_configured": true, 00:12:51.851 "data_offset": 0, 00:12:51.851 "data_size": 65536 00:12:51.851 } 00:12:51.851 ] 00:12:51.851 }' 00:12:51.851 23:52:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.851 23:52:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.422 [2024-11-02 23:52:46.261995] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.422 "name": "Existed_Raid", 00:12:52.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.422 "strip_size_kb": 64, 00:12:52.422 "state": "configuring", 00:12:52.422 "raid_level": "raid5f", 00:12:52.422 "superblock": false, 00:12:52.422 "num_base_bdevs": 3, 00:12:52.422 "num_base_bdevs_discovered": 1, 00:12:52.422 "num_base_bdevs_operational": 3, 00:12:52.422 "base_bdevs_list": [ 00:12:52.422 { 00:12:52.422 "name": null, 00:12:52.422 "uuid": "7152dffa-cce5-44e8-9d92-c986a24fd895", 00:12:52.422 "is_configured": false, 00:12:52.422 "data_offset": 0, 00:12:52.422 "data_size": 65536 00:12:52.422 }, 00:12:52.422 { 00:12:52.422 "name": null, 00:12:52.422 "uuid": "68e533da-1f73-4a00-b79e-1e90e7eccc69", 00:12:52.422 "is_configured": false, 00:12:52.422 "data_offset": 0, 00:12:52.422 "data_size": 65536 00:12:52.422 }, 00:12:52.422 { 00:12:52.422 "name": "BaseBdev3", 00:12:52.422 "uuid": "c12d612f-2f0b-41bb-aeef-381fa796e2c4", 00:12:52.422 "is_configured": true, 00:12:52.422 "data_offset": 0, 00:12:52.422 "data_size": 65536 00:12:52.422 } 00:12:52.422 ] 00:12:52.422 }' 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.422 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.682 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.682 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.682 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.682 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:52.682 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.682 23:52:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:52.683 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:52.683 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.683 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.683 [2024-11-02 23:52:46.755498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:52.683 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.683 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:52.683 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.683 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.683 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:52.683 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.683 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.683 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.683 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.683 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.683 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.683 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.683 23:52:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.683 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.683 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.943 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.943 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.943 "name": "Existed_Raid", 00:12:52.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.943 "strip_size_kb": 64, 00:12:52.943 "state": "configuring", 00:12:52.943 "raid_level": "raid5f", 00:12:52.943 "superblock": false, 00:12:52.943 "num_base_bdevs": 3, 00:12:52.943 "num_base_bdevs_discovered": 2, 00:12:52.943 "num_base_bdevs_operational": 3, 00:12:52.943 "base_bdevs_list": [ 00:12:52.944 { 00:12:52.944 "name": null, 00:12:52.944 "uuid": "7152dffa-cce5-44e8-9d92-c986a24fd895", 00:12:52.944 "is_configured": false, 00:12:52.944 "data_offset": 0, 00:12:52.944 "data_size": 65536 00:12:52.944 }, 00:12:52.944 { 00:12:52.944 "name": "BaseBdev2", 00:12:52.944 "uuid": "68e533da-1f73-4a00-b79e-1e90e7eccc69", 00:12:52.944 "is_configured": true, 00:12:52.944 "data_offset": 0, 00:12:52.944 "data_size": 65536 00:12:52.944 }, 00:12:52.944 { 00:12:52.944 "name": "BaseBdev3", 00:12:52.944 "uuid": "c12d612f-2f0b-41bb-aeef-381fa796e2c4", 00:12:52.944 "is_configured": true, 00:12:52.944 "data_offset": 0, 00:12:52.944 "data_size": 65536 00:12:52.944 } 00:12:52.944 ] 00:12:52.944 }' 00:12:52.944 23:52:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.944 23:52:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.204 23:52:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7152dffa-cce5-44e8-9d92-c986a24fd895 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.204 [2024-11-02 23:52:47.277518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:53.204 [2024-11-02 23:52:47.277567] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:12:53.204 [2024-11-02 23:52:47.277577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:53.204 [2024-11-02 23:52:47.277795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002870 00:12:53.204 [2024-11-02 23:52:47.278177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:12:53.204 [2024-11-02 23:52:47.278196] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:12:53.204 [2024-11-02 23:52:47.278370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.204 NewBaseBdev 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:53.204 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.204 23:52:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.465 [ 00:12:53.465 { 00:12:53.465 "name": "NewBaseBdev", 00:12:53.465 "aliases": [ 00:12:53.465 "7152dffa-cce5-44e8-9d92-c986a24fd895" 00:12:53.465 ], 00:12:53.465 "product_name": "Malloc disk", 00:12:53.465 "block_size": 512, 00:12:53.465 "num_blocks": 65536, 00:12:53.465 "uuid": "7152dffa-cce5-44e8-9d92-c986a24fd895", 00:12:53.465 "assigned_rate_limits": { 00:12:53.465 "rw_ios_per_sec": 0, 00:12:53.465 "rw_mbytes_per_sec": 0, 00:12:53.465 "r_mbytes_per_sec": 0, 00:12:53.465 "w_mbytes_per_sec": 0 00:12:53.465 }, 00:12:53.465 "claimed": true, 00:12:53.465 "claim_type": "exclusive_write", 00:12:53.465 "zoned": false, 00:12:53.465 "supported_io_types": { 00:12:53.465 "read": true, 00:12:53.465 "write": true, 00:12:53.465 "unmap": true, 00:12:53.465 "flush": true, 00:12:53.465 "reset": true, 00:12:53.465 "nvme_admin": false, 00:12:53.465 "nvme_io": false, 00:12:53.465 "nvme_io_md": false, 00:12:53.465 "write_zeroes": true, 00:12:53.465 "zcopy": true, 00:12:53.465 "get_zone_info": false, 00:12:53.465 "zone_management": false, 00:12:53.465 "zone_append": false, 00:12:53.465 "compare": false, 00:12:53.465 "compare_and_write": false, 00:12:53.465 "abort": true, 00:12:53.465 "seek_hole": false, 00:12:53.465 "seek_data": false, 00:12:53.465 "copy": true, 00:12:53.465 "nvme_iov_md": false 00:12:53.465 }, 00:12:53.465 "memory_domains": [ 00:12:53.465 { 00:12:53.465 "dma_device_id": "system", 00:12:53.465 "dma_device_type": 1 00:12:53.465 }, 00:12:53.465 { 00:12:53.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.465 "dma_device_type": 2 00:12:53.465 } 00:12:53.465 ], 00:12:53.465 "driver_specific": {} 00:12:53.465 } 00:12:53.465 ] 00:12:53.465 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.465 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:53.465 23:52:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:53.465 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.465 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.465 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:53.465 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.465 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.465 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.465 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.465 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.465 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.465 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.465 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.465 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.465 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.465 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.465 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.465 "name": "Existed_Raid", 00:12:53.465 "uuid": "005b3423-580c-4bc6-a314-019af8b3a883", 00:12:53.465 "strip_size_kb": 64, 00:12:53.465 "state": "online", 
00:12:53.465 "raid_level": "raid5f", 00:12:53.465 "superblock": false, 00:12:53.465 "num_base_bdevs": 3, 00:12:53.465 "num_base_bdevs_discovered": 3, 00:12:53.465 "num_base_bdevs_operational": 3, 00:12:53.465 "base_bdevs_list": [ 00:12:53.465 { 00:12:53.465 "name": "NewBaseBdev", 00:12:53.465 "uuid": "7152dffa-cce5-44e8-9d92-c986a24fd895", 00:12:53.465 "is_configured": true, 00:12:53.465 "data_offset": 0, 00:12:53.465 "data_size": 65536 00:12:53.465 }, 00:12:53.465 { 00:12:53.465 "name": "BaseBdev2", 00:12:53.465 "uuid": "68e533da-1f73-4a00-b79e-1e90e7eccc69", 00:12:53.465 "is_configured": true, 00:12:53.465 "data_offset": 0, 00:12:53.465 "data_size": 65536 00:12:53.465 }, 00:12:53.465 { 00:12:53.465 "name": "BaseBdev3", 00:12:53.465 "uuid": "c12d612f-2f0b-41bb-aeef-381fa796e2c4", 00:12:53.465 "is_configured": true, 00:12:53.465 "data_offset": 0, 00:12:53.465 "data_size": 65536 00:12:53.465 } 00:12:53.465 ] 00:12:53.465 }' 00:12:53.465 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.465 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.726 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:53.726 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:53.726 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:53.726 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:53.726 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:53.726 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:53.726 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:53.726 23:52:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:53.726 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.726 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.726 [2024-11-02 23:52:47.768959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:53.726 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.726 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:53.726 "name": "Existed_Raid", 00:12:53.726 "aliases": [ 00:12:53.726 "005b3423-580c-4bc6-a314-019af8b3a883" 00:12:53.726 ], 00:12:53.726 "product_name": "Raid Volume", 00:12:53.726 "block_size": 512, 00:12:53.726 "num_blocks": 131072, 00:12:53.726 "uuid": "005b3423-580c-4bc6-a314-019af8b3a883", 00:12:53.726 "assigned_rate_limits": { 00:12:53.726 "rw_ios_per_sec": 0, 00:12:53.726 "rw_mbytes_per_sec": 0, 00:12:53.726 "r_mbytes_per_sec": 0, 00:12:53.726 "w_mbytes_per_sec": 0 00:12:53.726 }, 00:12:53.726 "claimed": false, 00:12:53.726 "zoned": false, 00:12:53.726 "supported_io_types": { 00:12:53.726 "read": true, 00:12:53.726 "write": true, 00:12:53.726 "unmap": false, 00:12:53.726 "flush": false, 00:12:53.726 "reset": true, 00:12:53.726 "nvme_admin": false, 00:12:53.726 "nvme_io": false, 00:12:53.726 "nvme_io_md": false, 00:12:53.726 "write_zeroes": true, 00:12:53.726 "zcopy": false, 00:12:53.726 "get_zone_info": false, 00:12:53.726 "zone_management": false, 00:12:53.726 "zone_append": false, 00:12:53.726 "compare": false, 00:12:53.726 "compare_and_write": false, 00:12:53.726 "abort": false, 00:12:53.726 "seek_hole": false, 00:12:53.726 "seek_data": false, 00:12:53.726 "copy": false, 00:12:53.726 "nvme_iov_md": false 00:12:53.726 }, 00:12:53.726 "driver_specific": { 00:12:53.726 "raid": { 00:12:53.726 "uuid": 
"005b3423-580c-4bc6-a314-019af8b3a883", 00:12:53.726 "strip_size_kb": 64, 00:12:53.726 "state": "online", 00:12:53.726 "raid_level": "raid5f", 00:12:53.726 "superblock": false, 00:12:53.726 "num_base_bdevs": 3, 00:12:53.726 "num_base_bdevs_discovered": 3, 00:12:53.726 "num_base_bdevs_operational": 3, 00:12:53.726 "base_bdevs_list": [ 00:12:53.726 { 00:12:53.726 "name": "NewBaseBdev", 00:12:53.726 "uuid": "7152dffa-cce5-44e8-9d92-c986a24fd895", 00:12:53.726 "is_configured": true, 00:12:53.726 "data_offset": 0, 00:12:53.726 "data_size": 65536 00:12:53.726 }, 00:12:53.726 { 00:12:53.726 "name": "BaseBdev2", 00:12:53.726 "uuid": "68e533da-1f73-4a00-b79e-1e90e7eccc69", 00:12:53.726 "is_configured": true, 00:12:53.726 "data_offset": 0, 00:12:53.726 "data_size": 65536 00:12:53.726 }, 00:12:53.726 { 00:12:53.726 "name": "BaseBdev3", 00:12:53.726 "uuid": "c12d612f-2f0b-41bb-aeef-381fa796e2c4", 00:12:53.726 "is_configured": true, 00:12:53.726 "data_offset": 0, 00:12:53.726 "data_size": 65536 00:12:53.726 } 00:12:53.726 ] 00:12:53.726 } 00:12:53.726 } 00:12:53.726 }' 00:12:53.726 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:53.986 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:53.986 BaseBdev2 00:12:53.986 BaseBdev3' 00:12:53.986 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.986 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:53.986 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:53.986 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.986 23:52:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:12:53.986 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:53.986 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:53.986 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:53.986 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:53.986 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:53.986 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:53.986 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:12:53.986 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:53.986 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:53.986 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:53.986 23:52:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:53.986 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:53.986 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:53.986 23:52:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:53.986 23:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:12:53.986 23:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:53.986 23:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:53.986 23:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:53.986 23:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:53.986 23:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:53.986 23:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:53.986 23:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:53.986 23:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:53.986 23:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:53.986 [2024-11-02 23:52:48.052240] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:53.986 [2024-11-02 23:52:48.052266] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:53.986 [2024-11-02 23:52:48.052340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:53.986 [2024-11-02 23:52:48.052571] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:53.986 [2024-11-02 23:52:48.052583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline
00:12:53.986 23:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:53.986 23:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90253
00:12:53.986 23:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 90253 ']'
00:12:53.986 23:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 90253
00:12:53.986 23:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname
00:12:53.986 23:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:12:53.986 23:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90253
00:12:54.246 23:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:12:54.246 23:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:12:54.246 23:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90253'
killing process with pid 90253
00:12:54.246 23:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 90253
00:12:54.246 [2024-11-02 23:52:48.100211] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:54.246 23:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 90253
00:12:54.246 [2024-11-02 23:52:48.130916] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:12:54.506
00:12:54.506 real 0m8.948s
00:12:54.506 user 0m15.318s
00:12:54.506 sys 0m1.940s
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:54.506 ************************************
00:12:54.506 END TEST raid5f_state_function_test
00:12:54.506 ************************************
00:12:54.506 23:52:48 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true
00:12:54.506 23:52:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:12:54.506 23:52:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:12:54.506 23:52:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:54.506 ************************************
00:12:54.506 START TEST raid5f_state_function_test_sb
00:12:54.506 ************************************
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']'
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:12:54.506 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:12:54.507 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:12:54.507 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=90852
00:12:54.507 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:12:54.507 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90852'
Process raid pid: 90852
00:12:54.507 23:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 90852
00:12:54.507 23:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 90852 ']'
00:12:54.507 23:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:54.507 23:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:54.507 23:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:54.507 23:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable
00:12:54.507 23:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:54.507 [2024-11-02 23:52:48.509848] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization...
00:12:54.507 [2024-11-02 23:52:48.509974] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:54.766 [2024-11-02 23:52:48.664637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:54.766 [2024-11-02 23:52:48.689782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:54.766 [2024-11-02 23:52:48.731088] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:54.766 [2024-11-02 23:52:48.731124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:55.336 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:12:55.336 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0
00:12:55.336 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:12:55.336 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.336 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.336 [2024-11-02 23:52:49.347538] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:55.336 [2024-11-02 23:52:49.347591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:55.336 [2024-11-02 23:52:49.347600] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:55.336 [2024-11-02 23:52:49.347610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:55.337 [2024-11-02 23:52:49.347616] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:55.337 [2024-11-02 23:52:49.347626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:55.337 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.337 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:12:55.337 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:55.337 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:55.337 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:55.337 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:55.337 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:55.337 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:55.337 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:55.337 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:55.337 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:55.337 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:55.337 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:55.337 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.337 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.337 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.337 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:55.337 "name": "Existed_Raid",
00:12:55.337 "uuid": "edb99955-8114-4dc3-b94c-ae3ca9a3eed3",
00:12:55.337 "strip_size_kb": 64,
00:12:55.337 "state": "configuring",
00:12:55.337 "raid_level": "raid5f",
00:12:55.337 "superblock": true,
00:12:55.337 "num_base_bdevs": 3,
00:12:55.337 "num_base_bdevs_discovered": 0,
00:12:55.337 "num_base_bdevs_operational": 3,
00:12:55.337 "base_bdevs_list": [
00:12:55.337 {
00:12:55.337 "name": "BaseBdev1",
00:12:55.337 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:55.337 "is_configured": false,
00:12:55.337 "data_offset": 0,
00:12:55.337 "data_size": 0
00:12:55.337 },
00:12:55.337 {
00:12:55.337 "name": "BaseBdev2",
00:12:55.337 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:55.337 "is_configured": false,
00:12:55.337 "data_offset": 0,
00:12:55.337 "data_size": 0
00:12:55.337 },
00:12:55.337 {
00:12:55.337 "name": "BaseBdev3",
00:12:55.337 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:55.337 "is_configured": false,
00:12:55.337 "data_offset": 0,
00:12:55.337 "data_size": 0
00:12:55.337 }
00:12:55.337 ]
00:12:55.337 }'
00:12:55.337 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:55.337 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.906 [2024-11-02 23:52:49.826642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:55.906 [2024-11-02 23:52:49.826681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.906 [2024-11-02 23:52:49.838596] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:55.906 [2024-11-02 23:52:49.838635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:55.906 [2024-11-02 23:52:49.838644] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:55.906 [2024-11-02 23:52:49.838652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:55.906 [2024-11-02 23:52:49.838658] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:55.906 [2024-11-02 23:52:49.838666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.906 [2024-11-02 23:52:49.859247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.906 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.906 [
00:12:55.906 {
00:12:55.906 "name": "BaseBdev1",
00:12:55.906 "aliases": [
00:12:55.906 "b3d4d432-807f-4dcd-82d2-5019582b1086"
00:12:55.906 ],
00:12:55.906 "product_name": "Malloc disk",
00:12:55.906 "block_size": 512,
00:12:55.906 "num_blocks": 65536,
00:12:55.906 "uuid": "b3d4d432-807f-4dcd-82d2-5019582b1086",
00:12:55.906 "assigned_rate_limits": {
00:12:55.906 "rw_ios_per_sec": 0,
00:12:55.906 "rw_mbytes_per_sec": 0,
00:12:55.906 "r_mbytes_per_sec": 0,
00:12:55.906 "w_mbytes_per_sec": 0
00:12:55.906 },
00:12:55.906 "claimed": true,
00:12:55.906 "claim_type": "exclusive_write",
00:12:55.906 "zoned": false,
00:12:55.906 "supported_io_types": {
00:12:55.906 "read": true,
00:12:55.906 "write": true,
00:12:55.906 "unmap": true,
00:12:55.906 "flush": true,
00:12:55.906 "reset": true,
00:12:55.906 "nvme_admin": false,
00:12:55.906 "nvme_io": false,
00:12:55.906 "nvme_io_md": false,
00:12:55.906 "write_zeroes": true,
00:12:55.906 "zcopy": true,
00:12:55.906 "get_zone_info": false,
00:12:55.906 "zone_management": false,
00:12:55.906 "zone_append": false,
00:12:55.906 "compare": false,
00:12:55.906 "compare_and_write": false,
00:12:55.906 "abort": true,
00:12:55.906 "seek_hole": false,
00:12:55.906 "seek_data": false,
00:12:55.906 "copy": true,
00:12:55.906 "nvme_iov_md": false
00:12:55.906 },
00:12:55.906 "memory_domains": [
00:12:55.906 {
00:12:55.906 "dma_device_id": "system",
00:12:55.906 "dma_device_type": 1
00:12:55.906 },
00:12:55.906 {
00:12:55.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:55.906 "dma_device_type": 2
00:12:55.906 }
00:12:55.906 ],
00:12:55.906 "driver_specific": {}
00:12:55.906 }
00:12:55.906 ]
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:55.907 "name": "Existed_Raid",
00:12:55.907 "uuid": "588fbd24-63b4-477d-bf59-06f9a2d89876",
00:12:55.907 "strip_size_kb": 64,
00:12:55.907 "state": "configuring",
00:12:55.907 "raid_level": "raid5f",
00:12:55.907 "superblock": true,
00:12:55.907 "num_base_bdevs": 3,
00:12:55.907 "num_base_bdevs_discovered": 1,
00:12:55.907 "num_base_bdevs_operational": 3,
00:12:55.907 "base_bdevs_list": [
00:12:55.907 {
00:12:55.907 "name": "BaseBdev1",
00:12:55.907 "uuid": "b3d4d432-807f-4dcd-82d2-5019582b1086",
00:12:55.907 "is_configured": true,
00:12:55.907 "data_offset": 2048,
00:12:55.907 "data_size": 63488
00:12:55.907 },
00:12:55.907 {
00:12:55.907 "name": "BaseBdev2",
00:12:55.907 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:55.907 "is_configured": false,
00:12:55.907 "data_offset": 0,
00:12:55.907 "data_size": 0
00:12:55.907 },
00:12:55.907 {
00:12:55.907 "name": "BaseBdev3",
00:12:55.907 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:55.907 "is_configured": false,
00:12:55.907 "data_offset": 0,
00:12:55.907 "data_size": 0
00:12:55.907 }
00:12:55.907 ]
00:12:55.907 }'
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:55.907 23:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.477 [2024-11-02 23:52:50.290542] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:56.477 [2024-11-02 23:52:50.290592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.477 [2024-11-02 23:52:50.302552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:56.477 [2024-11-02 23:52:50.304303] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:56.477 [2024-11-02 23:52:50.304344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:56.477 [2024-11-02 23:52:50.304353] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:56.477 [2024-11-02 23:52:50.304363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:56.477 "name": "Existed_Raid",
00:12:56.477 "uuid": "9be066a6-0128-44aa-8212-6f0e08330893",
00:12:56.477 "strip_size_kb": 64,
00:12:56.477 "state": "configuring",
00:12:56.477 "raid_level": "raid5f",
00:12:56.477 "superblock": true,
00:12:56.477 "num_base_bdevs": 3,
00:12:56.477 "num_base_bdevs_discovered": 1,
00:12:56.477 "num_base_bdevs_operational": 3,
00:12:56.477 "base_bdevs_list": [
00:12:56.477 {
00:12:56.477 "name": "BaseBdev1",
00:12:56.477 "uuid": "b3d4d432-807f-4dcd-82d2-5019582b1086",
00:12:56.477 "is_configured": true,
00:12:56.477 "data_offset": 2048,
00:12:56.477 "data_size": 63488
00:12:56.477 },
00:12:56.477 {
00:12:56.477 "name": "BaseBdev2",
00:12:56.477 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:56.477 "is_configured": false,
00:12:56.477 "data_offset": 0,
00:12:56.477 "data_size": 0
00:12:56.477 },
00:12:56.477 {
00:12:56.477 "name": "BaseBdev3",
00:12:56.477 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:56.477 "is_configured": false,
00:12:56.477 "data_offset": 0,
00:12:56.477 "data_size": 0
00:12:56.477 }
00:12:56.477 ]
00:12:56.477 }'
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:56.477 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.738 [2024-11-02 23:52:50.716964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
BaseBdev2
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.738 [
00:12:56.738 {
00:12:56.738 "name": "BaseBdev2",
00:12:56.738 "aliases": [
00:12:56.738 "30930292-2f34-4f81-a260-cb217bf4d244"
00:12:56.738 ],
00:12:56.738 "product_name": "Malloc disk",
00:12:56.738 "block_size": 512,
00:12:56.738 "num_blocks": 65536,
00:12:56.738 "uuid": "30930292-2f34-4f81-a260-cb217bf4d244",
00:12:56.738 "assigned_rate_limits": {
00:12:56.738 "rw_ios_per_sec": 0,
00:12:56.738 "rw_mbytes_per_sec": 0,
00:12:56.738 "r_mbytes_per_sec": 0,
00:12:56.738 "w_mbytes_per_sec": 0
00:12:56.738 },
00:12:56.738 "claimed": true,
00:12:56.738 "claim_type": "exclusive_write",
00:12:56.738 "zoned": false,
00:12:56.738 "supported_io_types": {
00:12:56.738 "read": true,
00:12:56.738 "write": true,
00:12:56.738 "unmap": true,
00:12:56.738 "flush": true,
00:12:56.738 "reset": true,
00:12:56.738 "nvme_admin": false,
00:12:56.738 "nvme_io": false,
00:12:56.738 "nvme_io_md": false,
00:12:56.738 "write_zeroes": true,
00:12:56.738 "zcopy": true,
00:12:56.738 "get_zone_info": false,
00:12:56.738 "zone_management": false,
00:12:56.738 "zone_append": false,
00:12:56.738 "compare": false,
00:12:56.738 "compare_and_write": false,
00:12:56.738 "abort": true,
00:12:56.738 "seek_hole": false,
00:12:56.738 "seek_data": false,
00:12:56.738 "copy": true,
00:12:56.738 "nvme_iov_md": false
00:12:56.738 },
00:12:56.738 "memory_domains": [
00:12:56.738 {
00:12:56.738 "dma_device_id": "system",
00:12:56.738 "dma_device_type": 1
00:12:56.738 },
00:12:56.738 {
00:12:56.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:56.738 "dma_device_type": 2
00:12:56.738 }
00:12:56.738 ],
00:12:56.738 "driver_specific": {}
00:12:56.738 }
00:12:56.738 ]
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:56.738 "name": "Existed_Raid",
00:12:56.738 "uuid": "9be066a6-0128-44aa-8212-6f0e08330893",
00:12:56.738 "strip_size_kb": 64,
00:12:56.738 "state": "configuring",
00:12:56.738 "raid_level": "raid5f",
00:12:56.738 "superblock": true,
00:12:56.738 "num_base_bdevs": 3,
00:12:56.738 "num_base_bdevs_discovered": 2,
00:12:56.738 "num_base_bdevs_operational": 3,
00:12:56.738 "base_bdevs_list": [
00:12:56.738 {
00:12:56.738 "name": "BaseBdev1",
00:12:56.738 "uuid": "b3d4d432-807f-4dcd-82d2-5019582b1086",
00:12:56.738 "is_configured": true,
00:12:56.738 "data_offset": 2048,
00:12:56.738 "data_size": 63488
00:12:56.738 },
00:12:56.738 {
00:12:56.738 "name": "BaseBdev2",
00:12:56.738 "uuid": "30930292-2f34-4f81-a260-cb217bf4d244",
00:12:56.738 "is_configured": true,
00:12:56.738 "data_offset": 2048,
00:12:56.738 "data_size": 63488
00:12:56.738 },
00:12:56.738 {
00:12:56.738 "name": "BaseBdev3",
00:12:56.738 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:56.738 "is_configured": false,
00:12:56.738 "data_offset": 0,
00:12:56.738 "data_size": 0
00:12:56.738 }
00:12:56.738 ]
00:12:56.738 }'
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:56.738 23:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:57.308 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:57.308 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 --
# xtrace_disable 00:12:57.308 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.308 [2024-11-02 23:52:51.239253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:57.308 [2024-11-02 23:52:51.239553] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:12:57.308 [2024-11-02 23:52:51.239619] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:57.308 [2024-11-02 23:52:51.240026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:57.308 BaseBdev3 00:12:57.308 [2024-11-02 23:52:51.240583] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:12:57.308 [2024-11-02 23:52:51.240646] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:12:57.308 [2024-11-02 23:52:51.240894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.308 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.308 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:57.308 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:57.308 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 
00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.309 [ 00:12:57.309 { 00:12:57.309 "name": "BaseBdev3", 00:12:57.309 "aliases": [ 00:12:57.309 "8557c034-5f2f-43f8-b259-365db020f6b3" 00:12:57.309 ], 00:12:57.309 "product_name": "Malloc disk", 00:12:57.309 "block_size": 512, 00:12:57.309 "num_blocks": 65536, 00:12:57.309 "uuid": "8557c034-5f2f-43f8-b259-365db020f6b3", 00:12:57.309 "assigned_rate_limits": { 00:12:57.309 "rw_ios_per_sec": 0, 00:12:57.309 "rw_mbytes_per_sec": 0, 00:12:57.309 "r_mbytes_per_sec": 0, 00:12:57.309 "w_mbytes_per_sec": 0 00:12:57.309 }, 00:12:57.309 "claimed": true, 00:12:57.309 "claim_type": "exclusive_write", 00:12:57.309 "zoned": false, 00:12:57.309 "supported_io_types": { 00:12:57.309 "read": true, 00:12:57.309 "write": true, 00:12:57.309 "unmap": true, 00:12:57.309 "flush": true, 00:12:57.309 "reset": true, 00:12:57.309 "nvme_admin": false, 00:12:57.309 "nvme_io": false, 00:12:57.309 "nvme_io_md": false, 00:12:57.309 "write_zeroes": true, 00:12:57.309 "zcopy": true, 00:12:57.309 "get_zone_info": false, 00:12:57.309 "zone_management": false, 00:12:57.309 "zone_append": false, 00:12:57.309 "compare": false, 00:12:57.309 "compare_and_write": false, 00:12:57.309 "abort": true, 00:12:57.309 "seek_hole": false, 00:12:57.309 "seek_data": false, 00:12:57.309 "copy": true, 00:12:57.309 "nvme_iov_md": 
false 00:12:57.309 }, 00:12:57.309 "memory_domains": [ 00:12:57.309 { 00:12:57.309 "dma_device_id": "system", 00:12:57.309 "dma_device_type": 1 00:12:57.309 }, 00:12:57.309 { 00:12:57.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.309 "dma_device_type": 2 00:12:57.309 } 00:12:57.309 ], 00:12:57.309 "driver_specific": {} 00:12:57.309 } 00:12:57.309 ] 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.309 "name": "Existed_Raid", 00:12:57.309 "uuid": "9be066a6-0128-44aa-8212-6f0e08330893", 00:12:57.309 "strip_size_kb": 64, 00:12:57.309 "state": "online", 00:12:57.309 "raid_level": "raid5f", 00:12:57.309 "superblock": true, 00:12:57.309 "num_base_bdevs": 3, 00:12:57.309 "num_base_bdevs_discovered": 3, 00:12:57.309 "num_base_bdevs_operational": 3, 00:12:57.309 "base_bdevs_list": [ 00:12:57.309 { 00:12:57.309 "name": "BaseBdev1", 00:12:57.309 "uuid": "b3d4d432-807f-4dcd-82d2-5019582b1086", 00:12:57.309 "is_configured": true, 00:12:57.309 "data_offset": 2048, 00:12:57.309 "data_size": 63488 00:12:57.309 }, 00:12:57.309 { 00:12:57.309 "name": "BaseBdev2", 00:12:57.309 "uuid": "30930292-2f34-4f81-a260-cb217bf4d244", 00:12:57.309 "is_configured": true, 00:12:57.309 "data_offset": 2048, 00:12:57.309 "data_size": 63488 00:12:57.309 }, 00:12:57.309 { 00:12:57.309 "name": "BaseBdev3", 00:12:57.309 "uuid": "8557c034-5f2f-43f8-b259-365db020f6b3", 00:12:57.309 "is_configured": true, 00:12:57.309 "data_offset": 2048, 00:12:57.309 "data_size": 63488 00:12:57.309 } 00:12:57.309 ] 00:12:57.309 }' 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.309 23:52:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.880 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:57.880 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:57.880 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:57.880 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:57.880 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:57.880 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:57.880 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:57.880 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:57.880 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.880 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.880 [2024-11-02 23:52:51.774725] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.880 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.880 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:57.880 "name": "Existed_Raid", 00:12:57.880 "aliases": [ 00:12:57.880 "9be066a6-0128-44aa-8212-6f0e08330893" 00:12:57.880 ], 00:12:57.880 "product_name": "Raid Volume", 00:12:57.880 "block_size": 512, 00:12:57.880 "num_blocks": 126976, 00:12:57.880 "uuid": "9be066a6-0128-44aa-8212-6f0e08330893", 00:12:57.880 "assigned_rate_limits": { 00:12:57.880 "rw_ios_per_sec": 0, 00:12:57.880 "rw_mbytes_per_sec": 0, 00:12:57.880 "r_mbytes_per_sec": 
0, 00:12:57.880 "w_mbytes_per_sec": 0 00:12:57.880 }, 00:12:57.880 "claimed": false, 00:12:57.880 "zoned": false, 00:12:57.880 "supported_io_types": { 00:12:57.880 "read": true, 00:12:57.880 "write": true, 00:12:57.880 "unmap": false, 00:12:57.880 "flush": false, 00:12:57.880 "reset": true, 00:12:57.880 "nvme_admin": false, 00:12:57.880 "nvme_io": false, 00:12:57.880 "nvme_io_md": false, 00:12:57.880 "write_zeroes": true, 00:12:57.880 "zcopy": false, 00:12:57.880 "get_zone_info": false, 00:12:57.880 "zone_management": false, 00:12:57.880 "zone_append": false, 00:12:57.880 "compare": false, 00:12:57.880 "compare_and_write": false, 00:12:57.880 "abort": false, 00:12:57.880 "seek_hole": false, 00:12:57.880 "seek_data": false, 00:12:57.880 "copy": false, 00:12:57.880 "nvme_iov_md": false 00:12:57.880 }, 00:12:57.880 "driver_specific": { 00:12:57.880 "raid": { 00:12:57.880 "uuid": "9be066a6-0128-44aa-8212-6f0e08330893", 00:12:57.880 "strip_size_kb": 64, 00:12:57.880 "state": "online", 00:12:57.880 "raid_level": "raid5f", 00:12:57.880 "superblock": true, 00:12:57.880 "num_base_bdevs": 3, 00:12:57.880 "num_base_bdevs_discovered": 3, 00:12:57.880 "num_base_bdevs_operational": 3, 00:12:57.880 "base_bdevs_list": [ 00:12:57.880 { 00:12:57.880 "name": "BaseBdev1", 00:12:57.880 "uuid": "b3d4d432-807f-4dcd-82d2-5019582b1086", 00:12:57.880 "is_configured": true, 00:12:57.880 "data_offset": 2048, 00:12:57.880 "data_size": 63488 00:12:57.880 }, 00:12:57.880 { 00:12:57.880 "name": "BaseBdev2", 00:12:57.881 "uuid": "30930292-2f34-4f81-a260-cb217bf4d244", 00:12:57.881 "is_configured": true, 00:12:57.881 "data_offset": 2048, 00:12:57.881 "data_size": 63488 00:12:57.881 }, 00:12:57.881 { 00:12:57.881 "name": "BaseBdev3", 00:12:57.881 "uuid": "8557c034-5f2f-43f8-b259-365db020f6b3", 00:12:57.881 "is_configured": true, 00:12:57.881 "data_offset": 2048, 00:12:57.881 "data_size": 63488 00:12:57.881 } 00:12:57.881 ] 00:12:57.881 } 00:12:57.881 } 00:12:57.881 }' 00:12:57.881 23:52:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.881 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:57.881 BaseBdev2 00:12:57.881 BaseBdev3' 00:12:57.881 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.881 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:57.881 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.881 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:57.881 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.881 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.881 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.881 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.881 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.881 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.881 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.881 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:57.881 23:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:12:57.881 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.881 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.141 23:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.141 [2024-11-02 23:52:52.070046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.141 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.141 "name": "Existed_Raid", 00:12:58.141 "uuid": "9be066a6-0128-44aa-8212-6f0e08330893", 00:12:58.141 "strip_size_kb": 64, 00:12:58.142 "state": "online", 00:12:58.142 "raid_level": "raid5f", 00:12:58.142 "superblock": true, 00:12:58.142 "num_base_bdevs": 3, 00:12:58.142 "num_base_bdevs_discovered": 2, 00:12:58.142 "num_base_bdevs_operational": 2, 00:12:58.142 "base_bdevs_list": [ 00:12:58.142 { 00:12:58.142 "name": null, 00:12:58.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.142 "is_configured": false, 00:12:58.142 "data_offset": 0, 00:12:58.142 "data_size": 63488 00:12:58.142 }, 00:12:58.142 { 00:12:58.142 "name": "BaseBdev2", 00:12:58.142 "uuid": "30930292-2f34-4f81-a260-cb217bf4d244", 00:12:58.142 "is_configured": true, 00:12:58.142 "data_offset": 2048, 00:12:58.142 "data_size": 63488 00:12:58.142 }, 00:12:58.142 { 00:12:58.142 "name": "BaseBdev3", 00:12:58.142 "uuid": "8557c034-5f2f-43f8-b259-365db020f6b3", 00:12:58.142 "is_configured": true, 00:12:58.142 "data_offset": 2048, 00:12:58.142 "data_size": 63488 00:12:58.142 } 00:12:58.142 ] 00:12:58.142 }' 00:12:58.142 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.142 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.741 [2024-11-02 23:52:52.592412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:58.741 [2024-11-02 23:52:52.592609] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.741 [2024-11-02 23:52:52.603494] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:58.741 23:52:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.741 [2024-11-02 23:52:52.651437] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:58.741 [2024-11-02 23:52:52.651540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.741 
23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.741 BaseBdev2 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.741 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.741 [ 00:12:58.741 { 00:12:58.741 "name": "BaseBdev2", 00:12:58.741 "aliases": [ 00:12:58.741 "0b193bdf-d590-4c58-876e-9a91ef0b8d7d" 00:12:58.741 ], 00:12:58.741 "product_name": "Malloc disk", 00:12:58.741 "block_size": 512, 00:12:58.741 "num_blocks": 65536, 00:12:58.741 "uuid": "0b193bdf-d590-4c58-876e-9a91ef0b8d7d", 00:12:58.741 "assigned_rate_limits": { 00:12:58.741 "rw_ios_per_sec": 0, 00:12:58.741 "rw_mbytes_per_sec": 0, 00:12:58.741 "r_mbytes_per_sec": 0, 00:12:58.741 "w_mbytes_per_sec": 0 00:12:58.741 }, 00:12:58.741 "claimed": false, 00:12:58.741 "zoned": false, 00:12:58.742 "supported_io_types": { 00:12:58.742 "read": true, 00:12:58.742 "write": true, 00:12:58.742 "unmap": true, 00:12:58.742 "flush": true, 00:12:58.742 "reset": true, 00:12:58.742 "nvme_admin": false, 00:12:58.742 "nvme_io": false, 00:12:58.742 "nvme_io_md": false, 00:12:58.742 "write_zeroes": true, 00:12:58.742 "zcopy": true, 00:12:58.742 "get_zone_info": false, 00:12:58.742 "zone_management": false, 00:12:58.742 "zone_append": false, 00:12:58.742 "compare": false, 00:12:58.742 "compare_and_write": false, 
00:12:58.742 "abort": true, 00:12:58.742 "seek_hole": false, 00:12:58.742 "seek_data": false, 00:12:58.742 "copy": true, 00:12:58.742 "nvme_iov_md": false 00:12:58.742 }, 00:12:58.742 "memory_domains": [ 00:12:58.742 { 00:12:58.742 "dma_device_id": "system", 00:12:58.742 "dma_device_type": 1 00:12:58.742 }, 00:12:58.742 { 00:12:58.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.742 "dma_device_type": 2 00:12:58.742 } 00:12:58.742 ], 00:12:58.742 "driver_specific": {} 00:12:58.742 } 00:12:58.742 ] 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.742 BaseBdev3 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.742 [ 00:12:58.742 { 00:12:58.742 "name": "BaseBdev3", 00:12:58.742 "aliases": [ 00:12:58.742 "0ac3ebdd-d301-405a-ae9a-34574e0f1b2b" 00:12:58.742 ], 00:12:58.742 "product_name": "Malloc disk", 00:12:58.742 "block_size": 512, 00:12:58.742 "num_blocks": 65536, 00:12:58.742 "uuid": "0ac3ebdd-d301-405a-ae9a-34574e0f1b2b", 00:12:58.742 "assigned_rate_limits": { 00:12:58.742 "rw_ios_per_sec": 0, 00:12:58.742 "rw_mbytes_per_sec": 0, 00:12:58.742 "r_mbytes_per_sec": 0, 00:12:58.742 "w_mbytes_per_sec": 0 00:12:58.742 }, 00:12:58.742 "claimed": false, 00:12:58.742 "zoned": false, 00:12:58.742 "supported_io_types": { 00:12:58.742 "read": true, 00:12:58.742 "write": true, 00:12:58.742 "unmap": true, 00:12:58.742 "flush": true, 00:12:58.742 "reset": true, 00:12:58.742 "nvme_admin": false, 00:12:58.742 "nvme_io": false, 00:12:58.742 "nvme_io_md": false, 00:12:58.742 "write_zeroes": true, 00:12:58.742 "zcopy": true, 00:12:58.742 "get_zone_info": false, 00:12:58.742 "zone_management": false, 
00:12:58.742 "zone_append": false, 00:12:58.742 "compare": false, 00:12:58.742 "compare_and_write": false, 00:12:58.742 "abort": true, 00:12:58.742 "seek_hole": false, 00:12:58.742 "seek_data": false, 00:12:58.742 "copy": true, 00:12:58.742 "nvme_iov_md": false 00:12:58.742 }, 00:12:58.742 "memory_domains": [ 00:12:58.742 { 00:12:58.742 "dma_device_id": "system", 00:12:58.742 "dma_device_type": 1 00:12:58.742 }, 00:12:58.742 { 00:12:58.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.742 "dma_device_type": 2 00:12:58.742 } 00:12:58.742 ], 00:12:58.742 "driver_specific": {} 00:12:58.742 } 00:12:58.742 ] 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.742 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.003 [2024-11-02 23:52:52.818719] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:59.003 [2024-11-02 23:52:52.818817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:59.003 [2024-11-02 23:52:52.818860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.003 [2024-11-02 23:52:52.820694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:59.003 
23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:59.003 "name": "Existed_Raid", 00:12:59.003 "uuid": "8d557cb4-e884-44b2-8589-665cf4a8057d", 00:12:59.003 "strip_size_kb": 64, 00:12:59.003 "state": "configuring", 00:12:59.003 "raid_level": "raid5f", 00:12:59.003 "superblock": true, 00:12:59.003 "num_base_bdevs": 3, 00:12:59.003 "num_base_bdevs_discovered": 2, 00:12:59.003 "num_base_bdevs_operational": 3, 00:12:59.003 "base_bdevs_list": [ 00:12:59.003 { 00:12:59.003 "name": "BaseBdev1", 00:12:59.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.003 "is_configured": false, 00:12:59.003 "data_offset": 0, 00:12:59.003 "data_size": 0 00:12:59.003 }, 00:12:59.003 { 00:12:59.003 "name": "BaseBdev2", 00:12:59.003 "uuid": "0b193bdf-d590-4c58-876e-9a91ef0b8d7d", 00:12:59.003 "is_configured": true, 00:12:59.003 "data_offset": 2048, 00:12:59.003 "data_size": 63488 00:12:59.003 }, 00:12:59.003 { 00:12:59.003 "name": "BaseBdev3", 00:12:59.003 "uuid": "0ac3ebdd-d301-405a-ae9a-34574e0f1b2b", 00:12:59.003 "is_configured": true, 00:12:59.003 "data_offset": 2048, 00:12:59.003 "data_size": 63488 00:12:59.003 } 00:12:59.003 ] 00:12:59.003 }' 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.003 23:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.263 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:59.263 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.263 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.263 [2024-11-02 23:52:53.241984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:59.263 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.263 23:52:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:59.263 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.263 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.263 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:59.263 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.263 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.263 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.263 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.263 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.263 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.263 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.263 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.263 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.263 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.263 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.264 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.264 "name": "Existed_Raid", 00:12:59.264 "uuid": "8d557cb4-e884-44b2-8589-665cf4a8057d", 00:12:59.264 "strip_size_kb": 64, 00:12:59.264 
"state": "configuring", 00:12:59.264 "raid_level": "raid5f", 00:12:59.264 "superblock": true, 00:12:59.264 "num_base_bdevs": 3, 00:12:59.264 "num_base_bdevs_discovered": 1, 00:12:59.264 "num_base_bdevs_operational": 3, 00:12:59.264 "base_bdevs_list": [ 00:12:59.264 { 00:12:59.264 "name": "BaseBdev1", 00:12:59.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.264 "is_configured": false, 00:12:59.264 "data_offset": 0, 00:12:59.264 "data_size": 0 00:12:59.264 }, 00:12:59.264 { 00:12:59.264 "name": null, 00:12:59.264 "uuid": "0b193bdf-d590-4c58-876e-9a91ef0b8d7d", 00:12:59.264 "is_configured": false, 00:12:59.264 "data_offset": 0, 00:12:59.264 "data_size": 63488 00:12:59.264 }, 00:12:59.264 { 00:12:59.264 "name": "BaseBdev3", 00:12:59.264 "uuid": "0ac3ebdd-d301-405a-ae9a-34574e0f1b2b", 00:12:59.264 "is_configured": true, 00:12:59.264 "data_offset": 2048, 00:12:59.264 "data_size": 63488 00:12:59.264 } 00:12:59.264 ] 00:12:59.264 }' 00:12:59.264 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.264 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev1 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.833 [2024-11-02 23:52:53.740557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.833 BaseBdev1 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:59.833 [ 00:12:59.833 { 00:12:59.833 "name": "BaseBdev1", 00:12:59.833 "aliases": [ 00:12:59.833 "2df2a4cb-a350-48d6-b576-f189ceaf717d" 00:12:59.833 ], 00:12:59.833 "product_name": "Malloc disk", 00:12:59.833 "block_size": 512, 00:12:59.833 "num_blocks": 65536, 00:12:59.833 "uuid": "2df2a4cb-a350-48d6-b576-f189ceaf717d", 00:12:59.833 "assigned_rate_limits": { 00:12:59.833 "rw_ios_per_sec": 0, 00:12:59.833 "rw_mbytes_per_sec": 0, 00:12:59.833 "r_mbytes_per_sec": 0, 00:12:59.833 "w_mbytes_per_sec": 0 00:12:59.833 }, 00:12:59.833 "claimed": true, 00:12:59.833 "claim_type": "exclusive_write", 00:12:59.833 "zoned": false, 00:12:59.833 "supported_io_types": { 00:12:59.833 "read": true, 00:12:59.833 "write": true, 00:12:59.833 "unmap": true, 00:12:59.833 "flush": true, 00:12:59.833 "reset": true, 00:12:59.833 "nvme_admin": false, 00:12:59.833 "nvme_io": false, 00:12:59.833 "nvme_io_md": false, 00:12:59.833 "write_zeroes": true, 00:12:59.833 "zcopy": true, 00:12:59.833 "get_zone_info": false, 00:12:59.833 "zone_management": false, 00:12:59.833 "zone_append": false, 00:12:59.833 "compare": false, 00:12:59.833 "compare_and_write": false, 00:12:59.833 "abort": true, 00:12:59.833 "seek_hole": false, 00:12:59.833 "seek_data": false, 00:12:59.833 "copy": true, 00:12:59.833 "nvme_iov_md": false 00:12:59.833 }, 00:12:59.833 "memory_domains": [ 00:12:59.833 { 00:12:59.833 "dma_device_id": "system", 00:12:59.833 "dma_device_type": 1 00:12:59.833 }, 00:12:59.833 { 00:12:59.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.833 "dma_device_type": 2 00:12:59.833 } 00:12:59.833 ], 00:12:59.833 "driver_specific": {} 00:12:59.833 } 00:12:59.833 ] 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.833 "name": "Existed_Raid", 00:12:59.833 "uuid": "8d557cb4-e884-44b2-8589-665cf4a8057d", 00:12:59.833 "strip_size_kb": 64, 00:12:59.833 
"state": "configuring", 00:12:59.833 "raid_level": "raid5f", 00:12:59.833 "superblock": true, 00:12:59.833 "num_base_bdevs": 3, 00:12:59.833 "num_base_bdevs_discovered": 2, 00:12:59.833 "num_base_bdevs_operational": 3, 00:12:59.833 "base_bdevs_list": [ 00:12:59.833 { 00:12:59.833 "name": "BaseBdev1", 00:12:59.833 "uuid": "2df2a4cb-a350-48d6-b576-f189ceaf717d", 00:12:59.833 "is_configured": true, 00:12:59.833 "data_offset": 2048, 00:12:59.833 "data_size": 63488 00:12:59.833 }, 00:12:59.833 { 00:12:59.833 "name": null, 00:12:59.833 "uuid": "0b193bdf-d590-4c58-876e-9a91ef0b8d7d", 00:12:59.833 "is_configured": false, 00:12:59.833 "data_offset": 0, 00:12:59.833 "data_size": 63488 00:12:59.833 }, 00:12:59.833 { 00:12:59.833 "name": "BaseBdev3", 00:12:59.833 "uuid": "0ac3ebdd-d301-405a-ae9a-34574e0f1b2b", 00:12:59.833 "is_configured": true, 00:12:59.833 "data_offset": 2048, 00:12:59.833 "data_size": 63488 00:12:59.833 } 00:12:59.833 ] 00:12:59.833 }' 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.833 23:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev3 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.403 [2024-11-02 23:52:54.255704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.403 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.403 23:52:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.404 23:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.404 23:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.404 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.404 "name": "Existed_Raid", 00:13:00.404 "uuid": "8d557cb4-e884-44b2-8589-665cf4a8057d", 00:13:00.404 "strip_size_kb": 64, 00:13:00.404 "state": "configuring", 00:13:00.404 "raid_level": "raid5f", 00:13:00.404 "superblock": true, 00:13:00.404 "num_base_bdevs": 3, 00:13:00.404 "num_base_bdevs_discovered": 1, 00:13:00.404 "num_base_bdevs_operational": 3, 00:13:00.404 "base_bdevs_list": [ 00:13:00.404 { 00:13:00.404 "name": "BaseBdev1", 00:13:00.404 "uuid": "2df2a4cb-a350-48d6-b576-f189ceaf717d", 00:13:00.404 "is_configured": true, 00:13:00.404 "data_offset": 2048, 00:13:00.404 "data_size": 63488 00:13:00.404 }, 00:13:00.404 { 00:13:00.404 "name": null, 00:13:00.404 "uuid": "0b193bdf-d590-4c58-876e-9a91ef0b8d7d", 00:13:00.404 "is_configured": false, 00:13:00.404 "data_offset": 0, 00:13:00.404 "data_size": 63488 00:13:00.404 }, 00:13:00.404 { 00:13:00.404 "name": null, 00:13:00.404 "uuid": "0ac3ebdd-d301-405a-ae9a-34574e0f1b2b", 00:13:00.404 "is_configured": false, 00:13:00.404 "data_offset": 0, 00:13:00.404 "data_size": 63488 00:13:00.404 } 00:13:00.404 ] 00:13:00.404 }' 00:13:00.404 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.404 23:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.664 [2024-11-02 23:52:54.690966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.664 "name": "Existed_Raid", 00:13:00.664 "uuid": "8d557cb4-e884-44b2-8589-665cf4a8057d", 00:13:00.664 "strip_size_kb": 64, 00:13:00.664 "state": "configuring", 00:13:00.664 "raid_level": "raid5f", 00:13:00.664 "superblock": true, 00:13:00.664 "num_base_bdevs": 3, 00:13:00.664 "num_base_bdevs_discovered": 2, 00:13:00.664 "num_base_bdevs_operational": 3, 00:13:00.664 "base_bdevs_list": [ 00:13:00.664 { 00:13:00.664 "name": "BaseBdev1", 00:13:00.664 "uuid": "2df2a4cb-a350-48d6-b576-f189ceaf717d", 00:13:00.664 "is_configured": true, 00:13:00.664 "data_offset": 2048, 00:13:00.664 "data_size": 63488 00:13:00.664 }, 00:13:00.664 { 00:13:00.664 "name": null, 00:13:00.664 "uuid": "0b193bdf-d590-4c58-876e-9a91ef0b8d7d", 00:13:00.664 "is_configured": false, 00:13:00.664 "data_offset": 0, 00:13:00.664 "data_size": 63488 00:13:00.664 }, 00:13:00.664 { 00:13:00.664 "name": "BaseBdev3", 00:13:00.664 "uuid": "0ac3ebdd-d301-405a-ae9a-34574e0f1b2b", 00:13:00.664 "is_configured": true, 00:13:00.664 "data_offset": 
2048, 00:13:00.664 "data_size": 63488 00:13:00.664 } 00:13:00.664 ] 00:13:00.664 }' 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.664 23:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.234 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.234 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:01.234 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.234 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.234 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.234 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:01.234 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:01.234 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.234 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.234 [2024-11-02 23:52:55.218105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:01.234 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.234 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:01.234 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.234 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.235 23:52:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:01.235 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.235 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.235 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.235 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.235 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.235 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.235 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.235 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.235 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.235 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.235 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.235 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.235 "name": "Existed_Raid", 00:13:01.235 "uuid": "8d557cb4-e884-44b2-8589-665cf4a8057d", 00:13:01.235 "strip_size_kb": 64, 00:13:01.235 "state": "configuring", 00:13:01.235 "raid_level": "raid5f", 00:13:01.235 "superblock": true, 00:13:01.235 "num_base_bdevs": 3, 00:13:01.235 "num_base_bdevs_discovered": 1, 00:13:01.235 "num_base_bdevs_operational": 3, 00:13:01.235 "base_bdevs_list": [ 00:13:01.235 { 00:13:01.235 "name": null, 00:13:01.235 "uuid": "2df2a4cb-a350-48d6-b576-f189ceaf717d", 
00:13:01.235 "is_configured": false, 00:13:01.235 "data_offset": 0, 00:13:01.235 "data_size": 63488 00:13:01.235 }, 00:13:01.235 { 00:13:01.235 "name": null, 00:13:01.235 "uuid": "0b193bdf-d590-4c58-876e-9a91ef0b8d7d", 00:13:01.235 "is_configured": false, 00:13:01.235 "data_offset": 0, 00:13:01.235 "data_size": 63488 00:13:01.235 }, 00:13:01.235 { 00:13:01.235 "name": "BaseBdev3", 00:13:01.235 "uuid": "0ac3ebdd-d301-405a-ae9a-34574e0f1b2b", 00:13:01.235 "is_configured": true, 00:13:01.235 "data_offset": 2048, 00:13:01.235 "data_size": 63488 00:13:01.235 } 00:13:01.235 ] 00:13:01.235 }' 00:13:01.235 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.235 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.804 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.804 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:01.804 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.804 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.805 [2024-11-02 23:52:55.755548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.805 "name": "Existed_Raid", 00:13:01.805 "uuid": "8d557cb4-e884-44b2-8589-665cf4a8057d", 00:13:01.805 "strip_size_kb": 64, 00:13:01.805 "state": "configuring", 00:13:01.805 "raid_level": "raid5f", 00:13:01.805 "superblock": true, 00:13:01.805 "num_base_bdevs": 3, 00:13:01.805 "num_base_bdevs_discovered": 2, 00:13:01.805 "num_base_bdevs_operational": 3, 00:13:01.805 "base_bdevs_list": [ 00:13:01.805 { 00:13:01.805 "name": null, 00:13:01.805 "uuid": "2df2a4cb-a350-48d6-b576-f189ceaf717d", 00:13:01.805 "is_configured": false, 00:13:01.805 "data_offset": 0, 00:13:01.805 "data_size": 63488 00:13:01.805 }, 00:13:01.805 { 00:13:01.805 "name": "BaseBdev2", 00:13:01.805 "uuid": "0b193bdf-d590-4c58-876e-9a91ef0b8d7d", 00:13:01.805 "is_configured": true, 00:13:01.805 "data_offset": 2048, 00:13:01.805 "data_size": 63488 00:13:01.805 }, 00:13:01.805 { 00:13:01.805 "name": "BaseBdev3", 00:13:01.805 "uuid": "0ac3ebdd-d301-405a-ae9a-34574e0f1b2b", 00:13:01.805 "is_configured": true, 00:13:01.805 "data_offset": 2048, 00:13:01.805 "data_size": 63488 00:13:01.805 } 00:13:01.805 ] 00:13:01.805 }' 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.805 23:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.376 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.376 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.376 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.376 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:02.376 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.376 23:52:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:02.376 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:02.376 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.376 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.376 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.376 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.376 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2df2a4cb-a350-48d6-b576-f189ceaf717d 00:13:02.376 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.376 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.376 NewBaseBdev 00:13:02.376 [2024-11-02 23:52:56.301522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:02.376 [2024-11-02 23:52:56.301693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:02.376 [2024-11-02 23:52:56.301710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:02.376 [2024-11-02 23:52:56.301955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:13:02.376 [2024-11-02 23:52:56.302369] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:02.377 [2024-11-02 23:52:56.302388] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:13:02.377 [2024-11-02 23:52:56.302516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.377 [ 00:13:02.377 { 00:13:02.377 "name": "NewBaseBdev", 00:13:02.377 "aliases": [ 00:13:02.377 "2df2a4cb-a350-48d6-b576-f189ceaf717d" 00:13:02.377 ], 00:13:02.377 "product_name": "Malloc disk", 00:13:02.377 "block_size": 512, 00:13:02.377 "num_blocks": 65536, 00:13:02.377 "uuid": "2df2a4cb-a350-48d6-b576-f189ceaf717d", 
00:13:02.377 "assigned_rate_limits": { 00:13:02.377 "rw_ios_per_sec": 0, 00:13:02.377 "rw_mbytes_per_sec": 0, 00:13:02.377 "r_mbytes_per_sec": 0, 00:13:02.377 "w_mbytes_per_sec": 0 00:13:02.377 }, 00:13:02.377 "claimed": true, 00:13:02.377 "claim_type": "exclusive_write", 00:13:02.377 "zoned": false, 00:13:02.377 "supported_io_types": { 00:13:02.377 "read": true, 00:13:02.377 "write": true, 00:13:02.377 "unmap": true, 00:13:02.377 "flush": true, 00:13:02.377 "reset": true, 00:13:02.377 "nvme_admin": false, 00:13:02.377 "nvme_io": false, 00:13:02.377 "nvme_io_md": false, 00:13:02.377 "write_zeroes": true, 00:13:02.377 "zcopy": true, 00:13:02.377 "get_zone_info": false, 00:13:02.377 "zone_management": false, 00:13:02.377 "zone_append": false, 00:13:02.377 "compare": false, 00:13:02.377 "compare_and_write": false, 00:13:02.377 "abort": true, 00:13:02.377 "seek_hole": false, 00:13:02.377 "seek_data": false, 00:13:02.377 "copy": true, 00:13:02.377 "nvme_iov_md": false 00:13:02.377 }, 00:13:02.377 "memory_domains": [ 00:13:02.377 { 00:13:02.377 "dma_device_id": "system", 00:13:02.377 "dma_device_type": 1 00:13:02.377 }, 00:13:02.377 { 00:13:02.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.377 "dma_device_type": 2 00:13:02.377 } 00:13:02.377 ], 00:13:02.377 "driver_specific": {} 00:13:02.377 } 00:13:02.377 ] 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.377 23:52:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.377 "name": "Existed_Raid", 00:13:02.377 "uuid": "8d557cb4-e884-44b2-8589-665cf4a8057d", 00:13:02.377 "strip_size_kb": 64, 00:13:02.377 "state": "online", 00:13:02.377 "raid_level": "raid5f", 00:13:02.377 "superblock": true, 00:13:02.377 "num_base_bdevs": 3, 00:13:02.377 "num_base_bdevs_discovered": 3, 00:13:02.377 "num_base_bdevs_operational": 3, 00:13:02.377 "base_bdevs_list": [ 00:13:02.377 { 00:13:02.377 "name": "NewBaseBdev", 00:13:02.377 "uuid": "2df2a4cb-a350-48d6-b576-f189ceaf717d", 
00:13:02.377 "is_configured": true, 00:13:02.377 "data_offset": 2048, 00:13:02.377 "data_size": 63488 00:13:02.377 }, 00:13:02.377 { 00:13:02.377 "name": "BaseBdev2", 00:13:02.377 "uuid": "0b193bdf-d590-4c58-876e-9a91ef0b8d7d", 00:13:02.377 "is_configured": true, 00:13:02.377 "data_offset": 2048, 00:13:02.377 "data_size": 63488 00:13:02.377 }, 00:13:02.377 { 00:13:02.377 "name": "BaseBdev3", 00:13:02.377 "uuid": "0ac3ebdd-d301-405a-ae9a-34574e0f1b2b", 00:13:02.377 "is_configured": true, 00:13:02.377 "data_offset": 2048, 00:13:02.377 "data_size": 63488 00:13:02.377 } 00:13:02.377 ] 00:13:02.377 }' 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.377 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.956 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:02.956 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:02.956 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:02.956 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:02.956 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:02.956 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:02.956 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:02.956 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:02.956 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.956 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.956 
[2024-11-02 23:52:56.792892] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.956 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.956 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:02.956 "name": "Existed_Raid", 00:13:02.956 "aliases": [ 00:13:02.956 "8d557cb4-e884-44b2-8589-665cf4a8057d" 00:13:02.956 ], 00:13:02.956 "product_name": "Raid Volume", 00:13:02.956 "block_size": 512, 00:13:02.956 "num_blocks": 126976, 00:13:02.956 "uuid": "8d557cb4-e884-44b2-8589-665cf4a8057d", 00:13:02.956 "assigned_rate_limits": { 00:13:02.956 "rw_ios_per_sec": 0, 00:13:02.956 "rw_mbytes_per_sec": 0, 00:13:02.956 "r_mbytes_per_sec": 0, 00:13:02.956 "w_mbytes_per_sec": 0 00:13:02.956 }, 00:13:02.956 "claimed": false, 00:13:02.956 "zoned": false, 00:13:02.956 "supported_io_types": { 00:13:02.956 "read": true, 00:13:02.956 "write": true, 00:13:02.956 "unmap": false, 00:13:02.956 "flush": false, 00:13:02.956 "reset": true, 00:13:02.956 "nvme_admin": false, 00:13:02.956 "nvme_io": false, 00:13:02.956 "nvme_io_md": false, 00:13:02.956 "write_zeroes": true, 00:13:02.956 "zcopy": false, 00:13:02.956 "get_zone_info": false, 00:13:02.956 "zone_management": false, 00:13:02.956 "zone_append": false, 00:13:02.956 "compare": false, 00:13:02.956 "compare_and_write": false, 00:13:02.956 "abort": false, 00:13:02.956 "seek_hole": false, 00:13:02.956 "seek_data": false, 00:13:02.956 "copy": false, 00:13:02.956 "nvme_iov_md": false 00:13:02.956 }, 00:13:02.956 "driver_specific": { 00:13:02.956 "raid": { 00:13:02.956 "uuid": "8d557cb4-e884-44b2-8589-665cf4a8057d", 00:13:02.956 "strip_size_kb": 64, 00:13:02.956 "state": "online", 00:13:02.956 "raid_level": "raid5f", 00:13:02.956 "superblock": true, 00:13:02.956 "num_base_bdevs": 3, 00:13:02.956 "num_base_bdevs_discovered": 3, 00:13:02.956 "num_base_bdevs_operational": 3, 00:13:02.956 "base_bdevs_list": 
[ 00:13:02.956 { 00:13:02.956 "name": "NewBaseBdev", 00:13:02.956 "uuid": "2df2a4cb-a350-48d6-b576-f189ceaf717d", 00:13:02.956 "is_configured": true, 00:13:02.956 "data_offset": 2048, 00:13:02.956 "data_size": 63488 00:13:02.956 }, 00:13:02.956 { 00:13:02.956 "name": "BaseBdev2", 00:13:02.957 "uuid": "0b193bdf-d590-4c58-876e-9a91ef0b8d7d", 00:13:02.957 "is_configured": true, 00:13:02.957 "data_offset": 2048, 00:13:02.957 "data_size": 63488 00:13:02.957 }, 00:13:02.957 { 00:13:02.957 "name": "BaseBdev3", 00:13:02.957 "uuid": "0ac3ebdd-d301-405a-ae9a-34574e0f1b2b", 00:13:02.957 "is_configured": true, 00:13:02.957 "data_offset": 2048, 00:13:02.957 "data_size": 63488 00:13:02.957 } 00:13:02.957 ] 00:13:02.957 } 00:13:02.957 } 00:13:02.957 }' 00:13:02.957 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:02.957 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:02.957 BaseBdev2 00:13:02.957 BaseBdev3' 00:13:02.957 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.957 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:02.957 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:02.957 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.957 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:02.957 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.957 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:13:02.957 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.957 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:02.957 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:02.957 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:02.957 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:02.957 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.957 23:52:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.957 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.957 23:52:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.957 23:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:02.957 23:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:02.957 23:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:02.957 23:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:02.957 23:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.957 23:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.957 23:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.957 23:52:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.957 23:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:02.957 23:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:02.957 23:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:02.957 23:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.957 23:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.957 [2024-11-02 23:52:57.048237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:02.957 [2024-11-02 23:52:57.048264] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:02.957 [2024-11-02 23:52:57.048334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:02.957 [2024-11-02 23:52:57.048583] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:02.957 [2024-11-02 23:52:57.048595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:13:03.217 23:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.217 23:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 90852 00:13:03.217 23:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 90852 ']' 00:13:03.217 23:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 90852 00:13:03.217 23:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:03.217 23:52:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:03.217 23:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90852 00:13:03.217 killing process with pid 90852 00:13:03.217 23:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:03.217 23:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:03.217 23:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90852' 00:13:03.217 23:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 90852 00:13:03.217 [2024-11-02 23:52:57.101369] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:03.217 23:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 90852 00:13:03.217 [2024-11-02 23:52:57.131409] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:03.478 23:52:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:03.478 00:13:03.478 real 0m8.920s 00:13:03.478 user 0m15.271s 00:13:03.478 sys 0m1.862s 00:13:03.478 ************************************ 00:13:03.478 END TEST raid5f_state_function_test_sb 00:13:03.478 ************************************ 00:13:03.478 23:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:03.478 23:52:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.478 23:52:57 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:13:03.478 23:52:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:03.478 23:52:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:03.478 23:52:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:13:03.478 ************************************ 00:13:03.478 START TEST raid5f_superblock_test 00:13:03.478 ************************************ 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 
00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91458 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91458 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 91458 ']' 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:03.478 23:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.478 [2024-11-02 23:52:57.503570] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:13:03.478 [2024-11-02 23:52:57.503792] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91458 ] 00:13:03.745 [2024-11-02 23:52:57.656500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.745 [2024-11-02 23:52:57.680942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.745 [2024-11-02 23:52:57.722172] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.745 [2024-11-02 23:52:57.722289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.321 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:04.321 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:13:04.321 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:04.321 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:04.321 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:04.321 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:04.321 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:04.321 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:04.321 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:04.321 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.322 malloc1 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.322 [2024-11-02 23:52:58.343587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:04.322 [2024-11-02 23:52:58.343705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.322 [2024-11-02 23:52:58.343765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:04.322 [2024-11-02 23:52:58.343800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.322 [2024-11-02 23:52:58.345828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.322 [2024-11-02 23:52:58.345922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:04.322 pt1 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.322 malloc2 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.322 [2024-11-02 23:52:58.371814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:04.322 [2024-11-02 23:52:58.371908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.322 [2024-11-02 23:52:58.371939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:04.322 [2024-11-02 23:52:58.371967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.322 [2024-11-02 23:52:58.373997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.322 [2024-11-02 23:52:58.374075] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:04.322 pt2 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.322 malloc3 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.322 [2024-11-02 23:52:58.404284] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:04.322 [2024-11-02 23:52:58.404374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.322 [2024-11-02 23:52:58.404427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:04.322 [2024-11-02 23:52:58.404456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.322 [2024-11-02 23:52:58.406484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.322 [2024-11-02 23:52:58.406520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:04.322 pt3 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.322 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.582 [2024-11-02 23:52:58.416333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:04.582 [2024-11-02 23:52:58.418255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:04.582 [2024-11-02 23:52:58.418347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:04.582 [2024-11-02 23:52:58.418546] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:04.582 [2024-11-02 23:52:58.418590] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:13:04.582 [2024-11-02 23:52:58.418866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:13:04.582 [2024-11-02 23:52:58.419347] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:04.582 [2024-11-02 23:52:58.419397] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:04.582 [2024-11-02 23:52:58.419552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.582 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.582 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:04.582 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.582 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.582 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:04.582 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.582 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.582 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.582 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.582 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.582 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.582 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.582 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:04.582 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.582 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.582 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.582 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.582 "name": "raid_bdev1", 00:13:04.582 "uuid": "16b60026-7d86-46f4-9252-d74f423f7d17", 00:13:04.582 "strip_size_kb": 64, 00:13:04.582 "state": "online", 00:13:04.582 "raid_level": "raid5f", 00:13:04.582 "superblock": true, 00:13:04.582 "num_base_bdevs": 3, 00:13:04.582 "num_base_bdevs_discovered": 3, 00:13:04.582 "num_base_bdevs_operational": 3, 00:13:04.582 "base_bdevs_list": [ 00:13:04.582 { 00:13:04.582 "name": "pt1", 00:13:04.582 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:04.582 "is_configured": true, 00:13:04.582 "data_offset": 2048, 00:13:04.582 "data_size": 63488 00:13:04.582 }, 00:13:04.582 { 00:13:04.582 "name": "pt2", 00:13:04.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:04.582 "is_configured": true, 00:13:04.582 "data_offset": 2048, 00:13:04.582 "data_size": 63488 00:13:04.582 }, 00:13:04.582 { 00:13:04.582 "name": "pt3", 00:13:04.582 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:04.582 "is_configured": true, 00:13:04.582 "data_offset": 2048, 00:13:04.582 "data_size": 63488 00:13:04.582 } 00:13:04.582 ] 00:13:04.582 }' 00:13:04.582 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.582 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.842 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:04.842 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:04.842 23:52:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:04.842 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:04.842 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:04.842 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:04.842 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:04.842 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.842 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.842 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:04.842 [2024-11-02 23:52:58.820377] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:04.842 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.842 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:04.842 "name": "raid_bdev1", 00:13:04.842 "aliases": [ 00:13:04.842 "16b60026-7d86-46f4-9252-d74f423f7d17" 00:13:04.842 ], 00:13:04.842 "product_name": "Raid Volume", 00:13:04.842 "block_size": 512, 00:13:04.842 "num_blocks": 126976, 00:13:04.842 "uuid": "16b60026-7d86-46f4-9252-d74f423f7d17", 00:13:04.843 "assigned_rate_limits": { 00:13:04.843 "rw_ios_per_sec": 0, 00:13:04.843 "rw_mbytes_per_sec": 0, 00:13:04.843 "r_mbytes_per_sec": 0, 00:13:04.843 "w_mbytes_per_sec": 0 00:13:04.843 }, 00:13:04.843 "claimed": false, 00:13:04.843 "zoned": false, 00:13:04.843 "supported_io_types": { 00:13:04.843 "read": true, 00:13:04.843 "write": true, 00:13:04.843 "unmap": false, 00:13:04.843 "flush": false, 00:13:04.843 "reset": true, 00:13:04.843 "nvme_admin": false, 00:13:04.843 "nvme_io": false, 00:13:04.843 "nvme_io_md": false, 
00:13:04.843 "write_zeroes": true, 00:13:04.843 "zcopy": false, 00:13:04.843 "get_zone_info": false, 00:13:04.843 "zone_management": false, 00:13:04.843 "zone_append": false, 00:13:04.843 "compare": false, 00:13:04.843 "compare_and_write": false, 00:13:04.843 "abort": false, 00:13:04.843 "seek_hole": false, 00:13:04.843 "seek_data": false, 00:13:04.843 "copy": false, 00:13:04.843 "nvme_iov_md": false 00:13:04.843 }, 00:13:04.843 "driver_specific": { 00:13:04.843 "raid": { 00:13:04.843 "uuid": "16b60026-7d86-46f4-9252-d74f423f7d17", 00:13:04.843 "strip_size_kb": 64, 00:13:04.843 "state": "online", 00:13:04.843 "raid_level": "raid5f", 00:13:04.843 "superblock": true, 00:13:04.843 "num_base_bdevs": 3, 00:13:04.843 "num_base_bdevs_discovered": 3, 00:13:04.843 "num_base_bdevs_operational": 3, 00:13:04.843 "base_bdevs_list": [ 00:13:04.843 { 00:13:04.843 "name": "pt1", 00:13:04.843 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:04.843 "is_configured": true, 00:13:04.843 "data_offset": 2048, 00:13:04.843 "data_size": 63488 00:13:04.843 }, 00:13:04.843 { 00:13:04.843 "name": "pt2", 00:13:04.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:04.843 "is_configured": true, 00:13:04.843 "data_offset": 2048, 00:13:04.843 "data_size": 63488 00:13:04.843 }, 00:13:04.843 { 00:13:04.843 "name": "pt3", 00:13:04.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:04.843 "is_configured": true, 00:13:04.843 "data_offset": 2048, 00:13:04.843 "data_size": 63488 00:13:04.843 } 00:13:04.843 ] 00:13:04.843 } 00:13:04.843 } 00:13:04.843 }' 00:13:04.843 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:04.843 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:04.843 pt2 00:13:04.843 pt3' 00:13:04.843 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:13:05.103 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:05.103 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.103 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.103 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:05.103 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.103 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.103 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.103 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.103 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.103 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.103 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:05.103 23:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.103 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.103 23:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.103 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.103 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.103 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.103 
23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.103 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:05.103 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.103 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.103 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.103 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.103 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.103 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.103 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:05.104 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:05.104 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.104 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.104 [2024-11-02 23:52:59.095896] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:05.104 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.104 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=16b60026-7d86-46f4-9252-d74f423f7d17 00:13:05.104 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 16b60026-7d86-46f4-9252-d74f423f7d17 ']' 00:13:05.104 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:05.104 23:52:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.104 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.104 [2024-11-02 23:52:59.139642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.104 [2024-11-02 23:52:59.139705] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.104 [2024-11-02 23:52:59.139836] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.104 [2024-11-02 23:52:59.139935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.104 [2024-11-02 23:52:59.139984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:05.104 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.104 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.104 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:05.104 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.104 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.104 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.104 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:05.104 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:05.104 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:05.364 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.365 [2024-11-02 23:52:59.283413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:05.365 [2024-11-02 23:52:59.285270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:05.365 [2024-11-02 23:52:59.285351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:05.365 [2024-11-02 23:52:59.285415] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:05.365 [2024-11-02 23:52:59.285498] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:05.365 [2024-11-02 23:52:59.285545] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:05.365 [2024-11-02 23:52:59.285594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.365 [2024-11-02 23:52:59.285637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:13:05.365 request: 00:13:05.365 { 00:13:05.365 "name": "raid_bdev1", 00:13:05.365 "raid_level": "raid5f", 00:13:05.365 "base_bdevs": [ 00:13:05.365 "malloc1", 00:13:05.365 "malloc2", 00:13:05.365 "malloc3" 00:13:05.365 ], 00:13:05.365 "strip_size_kb": 64, 00:13:05.365 "superblock": false, 00:13:05.365 "method": "bdev_raid_create", 00:13:05.365 "req_id": 1 00:13:05.365 } 00:13:05.365 Got JSON-RPC error response 00:13:05.365 response: 00:13:05.365 { 00:13:05.365 "code": -17, 00:13:05.365 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:05.365 } 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.365 
23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.365 [2024-11-02 23:52:59.347270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:05.365 [2024-11-02 23:52:59.347349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.365 [2024-11-02 23:52:59.347378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:05.365 [2024-11-02 23:52:59.347407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.365 [2024-11-02 23:52:59.349377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.365 [2024-11-02 23:52:59.349442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:05.365 [2024-11-02 23:52:59.349537] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:05.365 [2024-11-02 23:52:59.349586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:05.365 pt1 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.365 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.365 "name": "raid_bdev1", 00:13:05.365 "uuid": "16b60026-7d86-46f4-9252-d74f423f7d17", 00:13:05.365 "strip_size_kb": 64, 00:13:05.365 "state": "configuring", 00:13:05.365 "raid_level": "raid5f", 00:13:05.365 "superblock": true, 00:13:05.365 "num_base_bdevs": 3, 00:13:05.365 "num_base_bdevs_discovered": 1, 00:13:05.365 
"num_base_bdevs_operational": 3, 00:13:05.365 "base_bdevs_list": [ 00:13:05.365 { 00:13:05.365 "name": "pt1", 00:13:05.365 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:05.365 "is_configured": true, 00:13:05.365 "data_offset": 2048, 00:13:05.366 "data_size": 63488 00:13:05.366 }, 00:13:05.366 { 00:13:05.366 "name": null, 00:13:05.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:05.366 "is_configured": false, 00:13:05.366 "data_offset": 2048, 00:13:05.366 "data_size": 63488 00:13:05.366 }, 00:13:05.366 { 00:13:05.366 "name": null, 00:13:05.366 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:05.366 "is_configured": false, 00:13:05.366 "data_offset": 2048, 00:13:05.366 "data_size": 63488 00:13:05.366 } 00:13:05.366 ] 00:13:05.366 }' 00:13:05.366 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.366 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.935 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:05.935 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:05.935 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.935 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.935 [2024-11-02 23:52:59.810532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:05.935 [2024-11-02 23:52:59.810638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.935 [2024-11-02 23:52:59.810674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:05.935 [2024-11-02 23:52:59.810704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.935 [2024-11-02 23:52:59.811065] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.935 [2024-11-02 23:52:59.811119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:05.935 [2024-11-02 23:52:59.811206] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:05.935 [2024-11-02 23:52:59.811252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:05.935 pt2 00:13:05.935 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.935 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:05.935 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.935 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.936 [2024-11-02 23:52:59.822540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:05.936 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.936 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:05.936 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.936 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.936 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:05.936 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.936 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.936 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.936 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:05.936 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.936 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.936 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.936 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.936 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.936 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.936 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.936 23:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.936 "name": "raid_bdev1", 00:13:05.936 "uuid": "16b60026-7d86-46f4-9252-d74f423f7d17", 00:13:05.936 "strip_size_kb": 64, 00:13:05.936 "state": "configuring", 00:13:05.936 "raid_level": "raid5f", 00:13:05.936 "superblock": true, 00:13:05.936 "num_base_bdevs": 3, 00:13:05.936 "num_base_bdevs_discovered": 1, 00:13:05.936 "num_base_bdevs_operational": 3, 00:13:05.936 "base_bdevs_list": [ 00:13:05.936 { 00:13:05.936 "name": "pt1", 00:13:05.936 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:05.936 "is_configured": true, 00:13:05.936 "data_offset": 2048, 00:13:05.936 "data_size": 63488 00:13:05.936 }, 00:13:05.936 { 00:13:05.936 "name": null, 00:13:05.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:05.936 "is_configured": false, 00:13:05.936 "data_offset": 0, 00:13:05.936 "data_size": 63488 00:13:05.936 }, 00:13:05.936 { 00:13:05.936 "name": null, 00:13:05.936 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:05.936 "is_configured": false, 00:13:05.936 "data_offset": 2048, 00:13:05.936 "data_size": 63488 00:13:05.936 } 00:13:05.936 ] 00:13:05.936 }' 00:13:05.936 23:52:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.936 23:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.195 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:06.195 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:06.195 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:06.195 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.195 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.195 [2024-11-02 23:53:00.281767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:06.195 [2024-11-02 23:53:00.281857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.196 [2024-11-02 23:53:00.281889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:06.196 [2024-11-02 23:53:00.281914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.196 [2024-11-02 23:53:00.282274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.196 [2024-11-02 23:53:00.282325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:06.196 [2024-11-02 23:53:00.282393] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:06.196 [2024-11-02 23:53:00.282423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:06.196 pt2 00:13:06.196 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.196 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:06.196 23:53:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:06.196 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:06.196 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.196 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.456 [2024-11-02 23:53:00.289764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:06.456 [2024-11-02 23:53:00.289833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.456 [2024-11-02 23:53:00.289868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:06.456 [2024-11-02 23:53:00.289893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.456 [2024-11-02 23:53:00.290226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.456 [2024-11-02 23:53:00.290276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:06.456 [2024-11-02 23:53:00.290350] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:06.456 [2024-11-02 23:53:00.290391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:06.456 [2024-11-02 23:53:00.290531] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:06.456 [2024-11-02 23:53:00.290583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:06.456 [2024-11-02 23:53:00.290823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:06.456 [2024-11-02 23:53:00.291194] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:06.456 [2024-11-02 23:53:00.291207] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:13:06.456 [2024-11-02 23:53:00.291305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.456 pt3 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.456 "name": "raid_bdev1", 00:13:06.456 "uuid": "16b60026-7d86-46f4-9252-d74f423f7d17", 00:13:06.456 "strip_size_kb": 64, 00:13:06.456 "state": "online", 00:13:06.456 "raid_level": "raid5f", 00:13:06.456 "superblock": true, 00:13:06.456 "num_base_bdevs": 3, 00:13:06.456 "num_base_bdevs_discovered": 3, 00:13:06.456 "num_base_bdevs_operational": 3, 00:13:06.456 "base_bdevs_list": [ 00:13:06.456 { 00:13:06.456 "name": "pt1", 00:13:06.456 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:06.456 "is_configured": true, 00:13:06.456 "data_offset": 2048, 00:13:06.456 "data_size": 63488 00:13:06.456 }, 00:13:06.456 { 00:13:06.456 "name": "pt2", 00:13:06.456 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:06.456 "is_configured": true, 00:13:06.456 "data_offset": 2048, 00:13:06.456 "data_size": 63488 00:13:06.456 }, 00:13:06.456 { 00:13:06.456 "name": "pt3", 00:13:06.456 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:06.456 "is_configured": true, 00:13:06.456 "data_offset": 2048, 00:13:06.456 "data_size": 63488 00:13:06.456 } 00:13:06.456 ] 00:13:06.456 }' 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.456 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.716 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:06.716 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:06.716 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:06.716 
23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:06.716 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:06.716 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:06.716 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:06.716 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:06.716 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.716 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.716 [2024-11-02 23:53:00.737151] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:06.716 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.716 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:06.716 "name": "raid_bdev1", 00:13:06.716 "aliases": [ 00:13:06.716 "16b60026-7d86-46f4-9252-d74f423f7d17" 00:13:06.716 ], 00:13:06.716 "product_name": "Raid Volume", 00:13:06.716 "block_size": 512, 00:13:06.716 "num_blocks": 126976, 00:13:06.716 "uuid": "16b60026-7d86-46f4-9252-d74f423f7d17", 00:13:06.716 "assigned_rate_limits": { 00:13:06.716 "rw_ios_per_sec": 0, 00:13:06.716 "rw_mbytes_per_sec": 0, 00:13:06.716 "r_mbytes_per_sec": 0, 00:13:06.716 "w_mbytes_per_sec": 0 00:13:06.716 }, 00:13:06.716 "claimed": false, 00:13:06.716 "zoned": false, 00:13:06.716 "supported_io_types": { 00:13:06.716 "read": true, 00:13:06.716 "write": true, 00:13:06.716 "unmap": false, 00:13:06.716 "flush": false, 00:13:06.716 "reset": true, 00:13:06.716 "nvme_admin": false, 00:13:06.716 "nvme_io": false, 00:13:06.716 "nvme_io_md": false, 00:13:06.716 "write_zeroes": true, 00:13:06.716 "zcopy": false, 00:13:06.716 "get_zone_info": false, 
00:13:06.716 "zone_management": false, 00:13:06.716 "zone_append": false, 00:13:06.716 "compare": false, 00:13:06.716 "compare_and_write": false, 00:13:06.716 "abort": false, 00:13:06.716 "seek_hole": false, 00:13:06.716 "seek_data": false, 00:13:06.716 "copy": false, 00:13:06.716 "nvme_iov_md": false 00:13:06.716 }, 00:13:06.716 "driver_specific": { 00:13:06.716 "raid": { 00:13:06.716 "uuid": "16b60026-7d86-46f4-9252-d74f423f7d17", 00:13:06.716 "strip_size_kb": 64, 00:13:06.716 "state": "online", 00:13:06.716 "raid_level": "raid5f", 00:13:06.716 "superblock": true, 00:13:06.716 "num_base_bdevs": 3, 00:13:06.716 "num_base_bdevs_discovered": 3, 00:13:06.716 "num_base_bdevs_operational": 3, 00:13:06.716 "base_bdevs_list": [ 00:13:06.716 { 00:13:06.716 "name": "pt1", 00:13:06.716 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:06.716 "is_configured": true, 00:13:06.716 "data_offset": 2048, 00:13:06.716 "data_size": 63488 00:13:06.716 }, 00:13:06.716 { 00:13:06.717 "name": "pt2", 00:13:06.717 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:06.717 "is_configured": true, 00:13:06.717 "data_offset": 2048, 00:13:06.717 "data_size": 63488 00:13:06.717 }, 00:13:06.717 { 00:13:06.717 "name": "pt3", 00:13:06.717 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:06.717 "is_configured": true, 00:13:06.717 "data_offset": 2048, 00:13:06.717 "data_size": 63488 00:13:06.717 } 00:13:06.717 ] 00:13:06.717 } 00:13:06.717 } 00:13:06.717 }' 00:13:06.717 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:06.976 pt2 00:13:06.976 pt3' 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.976 23:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.976 [2024-11-02 23:53:01.024617] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 16b60026-7d86-46f4-9252-d74f423f7d17 '!=' 16b60026-7d86-46f4-9252-d74f423f7d17 ']' 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:06.976 23:53:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.976 [2024-11-02 23:53:01.064444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.976 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:07.237 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.237 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.237 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.237 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.237 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.237 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.237 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.237 23:53:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.237 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.237 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.237 "name": "raid_bdev1", 00:13:07.237 "uuid": "16b60026-7d86-46f4-9252-d74f423f7d17", 00:13:07.237 "strip_size_kb": 64, 00:13:07.237 "state": "online", 00:13:07.237 "raid_level": "raid5f", 00:13:07.237 "superblock": true, 00:13:07.237 "num_base_bdevs": 3, 00:13:07.237 "num_base_bdevs_discovered": 2, 00:13:07.237 "num_base_bdevs_operational": 2, 00:13:07.237 "base_bdevs_list": [ 00:13:07.237 { 00:13:07.237 "name": null, 00:13:07.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.237 "is_configured": false, 00:13:07.237 "data_offset": 0, 00:13:07.237 "data_size": 63488 00:13:07.237 }, 00:13:07.237 { 00:13:07.237 "name": "pt2", 00:13:07.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:07.237 "is_configured": true, 00:13:07.237 "data_offset": 2048, 00:13:07.237 "data_size": 63488 00:13:07.237 }, 00:13:07.237 { 00:13:07.237 "name": "pt3", 00:13:07.237 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:07.237 "is_configured": true, 00:13:07.237 "data_offset": 2048, 00:13:07.237 "data_size": 63488 00:13:07.237 } 00:13:07.237 ] 00:13:07.237 }' 00:13:07.237 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.237 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.496 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:07.496 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.496 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.496 [2024-11-02 23:53:01.555572] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:07.496 [2024-11-02 23:53:01.555636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:07.496 [2024-11-02 23:53:01.555708] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.497 [2024-11-02 23:53:01.555782] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:07.497 [2024-11-02 23:53:01.555829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:13:07.497 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.497 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.497 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.497 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.497 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:07.497 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:07.756 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.757 [2024-11-02 23:53:01.639420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:07.757 [2024-11-02 23:53:01.639497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.757 [2024-11-02 23:53:01.639545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:07.757 [2024-11-02 23:53:01.639571] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:13:07.757 [2024-11-02 23:53:01.641642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.757 [2024-11-02 23:53:01.641725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:07.757 [2024-11-02 23:53:01.641818] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:07.757 [2024-11-02 23:53:01.641888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:07.757 pt2 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.757 "name": "raid_bdev1", 00:13:07.757 "uuid": "16b60026-7d86-46f4-9252-d74f423f7d17", 00:13:07.757 "strip_size_kb": 64, 00:13:07.757 "state": "configuring", 00:13:07.757 "raid_level": "raid5f", 00:13:07.757 "superblock": true, 00:13:07.757 "num_base_bdevs": 3, 00:13:07.757 "num_base_bdevs_discovered": 1, 00:13:07.757 "num_base_bdevs_operational": 2, 00:13:07.757 "base_bdevs_list": [ 00:13:07.757 { 00:13:07.757 "name": null, 00:13:07.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.757 "is_configured": false, 00:13:07.757 "data_offset": 2048, 00:13:07.757 "data_size": 63488 00:13:07.757 }, 00:13:07.757 { 00:13:07.757 "name": "pt2", 00:13:07.757 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:07.757 "is_configured": true, 00:13:07.757 "data_offset": 2048, 00:13:07.757 "data_size": 63488 00:13:07.757 }, 00:13:07.757 { 00:13:07.757 "name": null, 00:13:07.757 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:07.757 "is_configured": false, 00:13:07.757 "data_offset": 2048, 00:13:07.757 "data_size": 63488 00:13:07.757 } 00:13:07.757 ] 00:13:07.757 }' 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.757 23:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.017 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:08.017 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:08.017 23:53:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:13:08.017 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:08.017 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.017 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.017 [2024-11-02 23:53:02.058715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:08.017 [2024-11-02 23:53:02.058806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.017 [2024-11-02 23:53:02.058857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:08.017 [2024-11-02 23:53:02.058884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.017 [2024-11-02 23:53:02.059230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.017 [2024-11-02 23:53:02.059281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:08.017 [2024-11-02 23:53:02.059368] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:08.017 [2024-11-02 23:53:02.059412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:08.017 [2024-11-02 23:53:02.059525] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:08.017 [2024-11-02 23:53:02.059560] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:08.017 [2024-11-02 23:53:02.059805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:08.017 [2024-11-02 23:53:02.060278] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:08.017 [2024-11-02 23:53:02.060327] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000001c80 00:13:08.017 [2024-11-02 23:53:02.060577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.017 pt3 00:13:08.017 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.017 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:08.017 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.017 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.017 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:08.018 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.018 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:08.018 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.018 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.018 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.018 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.018 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.018 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.018 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.018 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.018 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.277 23:53:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.277 "name": "raid_bdev1", 00:13:08.277 "uuid": "16b60026-7d86-46f4-9252-d74f423f7d17", 00:13:08.277 "strip_size_kb": 64, 00:13:08.277 "state": "online", 00:13:08.277 "raid_level": "raid5f", 00:13:08.277 "superblock": true, 00:13:08.277 "num_base_bdevs": 3, 00:13:08.277 "num_base_bdevs_discovered": 2, 00:13:08.277 "num_base_bdevs_operational": 2, 00:13:08.277 "base_bdevs_list": [ 00:13:08.277 { 00:13:08.277 "name": null, 00:13:08.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.277 "is_configured": false, 00:13:08.277 "data_offset": 2048, 00:13:08.277 "data_size": 63488 00:13:08.277 }, 00:13:08.277 { 00:13:08.277 "name": "pt2", 00:13:08.277 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:08.277 "is_configured": true, 00:13:08.277 "data_offset": 2048, 00:13:08.277 "data_size": 63488 00:13:08.277 }, 00:13:08.277 { 00:13:08.277 "name": "pt3", 00:13:08.277 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:08.277 "is_configured": true, 00:13:08.277 "data_offset": 2048, 00:13:08.277 "data_size": 63488 00:13:08.277 } 00:13:08.277 ] 00:13:08.277 }' 00:13:08.277 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.277 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.537 [2024-11-02 23:53:02.553920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:08.537 [2024-11-02 23:53:02.553980] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:08.537 [2024-11-02 23:53:02.554077] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.537 [2024-11-02 23:53:02.554147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.537 [2024-11-02 23:53:02.554196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.537 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.537 [2024-11-02 23:53:02.613833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:08.537 [2024-11-02 23:53:02.613932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.537 [2024-11-02 23:53:02.613964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:08.537 [2024-11-02 23:53:02.613993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.537 [2024-11-02 23:53:02.616183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.538 [2024-11-02 23:53:02.616268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:08.538 [2024-11-02 23:53:02.616354] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:08.538 [2024-11-02 23:53:02.616423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:08.538 [2024-11-02 23:53:02.616554] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:08.538 [2024-11-02 23:53:02.616609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:08.538 [2024-11-02 23:53:02.616705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:13:08.538 [2024-11-02 23:53:02.616789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:08.538 pt1 00:13:08.538 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.538 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:08.538 23:53:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:08.538 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.538 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:08.538 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:08.538 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.538 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:08.538 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.538 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.538 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.538 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.538 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.538 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.538 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.538 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.797 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.797 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.797 "name": "raid_bdev1", 00:13:08.797 "uuid": "16b60026-7d86-46f4-9252-d74f423f7d17", 00:13:08.797 "strip_size_kb": 64, 00:13:08.797 "state": "configuring", 00:13:08.797 "raid_level": "raid5f", 00:13:08.797 
"superblock": true, 00:13:08.797 "num_base_bdevs": 3, 00:13:08.797 "num_base_bdevs_discovered": 1, 00:13:08.797 "num_base_bdevs_operational": 2, 00:13:08.797 "base_bdevs_list": [ 00:13:08.797 { 00:13:08.797 "name": null, 00:13:08.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.797 "is_configured": false, 00:13:08.797 "data_offset": 2048, 00:13:08.797 "data_size": 63488 00:13:08.797 }, 00:13:08.797 { 00:13:08.797 "name": "pt2", 00:13:08.797 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:08.797 "is_configured": true, 00:13:08.797 "data_offset": 2048, 00:13:08.797 "data_size": 63488 00:13:08.797 }, 00:13:08.797 { 00:13:08.797 "name": null, 00:13:08.797 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:08.797 "is_configured": false, 00:13:08.797 "data_offset": 2048, 00:13:08.797 "data_size": 63488 00:13:08.797 } 00:13:08.797 ] 00:13:08.797 }' 00:13:08.797 23:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.797 23:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.057 [2024-11-02 23:53:03.073032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:09.057 [2024-11-02 23:53:03.073081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.057 [2024-11-02 23:53:03.073111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:09.057 [2024-11-02 23:53:03.073122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.057 [2024-11-02 23:53:03.073464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.057 [2024-11-02 23:53:03.073485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:09.057 [2024-11-02 23:53:03.073547] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:09.057 [2024-11-02 23:53:03.073569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:09.057 [2024-11-02 23:53:03.073648] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:13:09.057 [2024-11-02 23:53:03.073669] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:09.057 [2024-11-02 23:53:03.073898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:09.057 [2024-11-02 23:53:03.074320] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:13:09.057 [2024-11-02 23:53:03.074331] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:13:09.057 [2024-11-02 23:53:03.074495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.057 pt3 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.057 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.058 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.058 "name": "raid_bdev1", 00:13:09.058 "uuid": "16b60026-7d86-46f4-9252-d74f423f7d17", 00:13:09.058 "strip_size_kb": 64, 00:13:09.058 "state": "online", 00:13:09.058 "raid_level": 
"raid5f", 00:13:09.058 "superblock": true, 00:13:09.058 "num_base_bdevs": 3, 00:13:09.058 "num_base_bdevs_discovered": 2, 00:13:09.058 "num_base_bdevs_operational": 2, 00:13:09.058 "base_bdevs_list": [ 00:13:09.058 { 00:13:09.058 "name": null, 00:13:09.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.058 "is_configured": false, 00:13:09.058 "data_offset": 2048, 00:13:09.058 "data_size": 63488 00:13:09.058 }, 00:13:09.058 { 00:13:09.058 "name": "pt2", 00:13:09.058 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:09.058 "is_configured": true, 00:13:09.058 "data_offset": 2048, 00:13:09.058 "data_size": 63488 00:13:09.058 }, 00:13:09.058 { 00:13:09.058 "name": "pt3", 00:13:09.058 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:09.058 "is_configured": true, 00:13:09.058 "data_offset": 2048, 00:13:09.058 "data_size": 63488 00:13:09.058 } 00:13:09.058 ] 00:13:09.058 }' 00:13:09.058 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.058 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.628 [2024-11-02 23:53:03.568396] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 16b60026-7d86-46f4-9252-d74f423f7d17 '!=' 16b60026-7d86-46f4-9252-d74f423f7d17 ']' 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91458 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 91458 ']' 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 91458 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 91458 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 91458' 00:13:09.628 killing process with pid 91458 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 91458 00:13:09.628 [2024-11-02 23:53:03.652330] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:09.628 [2024-11-02 23:53:03.652401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:13:09.628 [2024-11-02 23:53:03.652456] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.628 [2024-11-02 23:53:03.652465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:13:09.628 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 91458 00:13:09.628 [2024-11-02 23:53:03.684138] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.888 23:53:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:09.888 00:13:09.888 real 0m6.478s 00:13:09.888 user 0m10.878s 00:13:09.888 sys 0m1.389s 00:13:09.888 ************************************ 00:13:09.888 END TEST raid5f_superblock_test 00:13:09.888 ************************************ 00:13:09.888 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:09.888 23:53:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.888 23:53:03 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:13:09.888 23:53:03 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:13:09.888 23:53:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:09.888 23:53:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:09.888 23:53:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:09.888 ************************************ 00:13:09.888 START TEST raid5f_rebuild_test 00:13:09.888 ************************************ 00:13:09.888 23:53:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true 00:13:09.888 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:09.888 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:13:09.888 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:09.888 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:09.888 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:09.888 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:10.148 23:53:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=91885 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 91885 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 91885 ']' 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:10.148 23:53:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.148 [2024-11-02 23:53:04.080215] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:13:10.148 [2024-11-02 23:53:04.080401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91885 ] 00:13:10.148 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:10.148 Zero copy mechanism will not be used. 00:13:10.148 [2024-11-02 23:53:04.235219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.408 [2024-11-02 23:53:04.260490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.408 [2024-11-02 23:53:04.301761] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.408 [2024-11-02 23:53:04.301869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.978 BaseBdev1_malloc 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.978 
23:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.978 [2024-11-02 23:53:04.903156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:10.978 [2024-11-02 23:53:04.903233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.978 [2024-11-02 23:53:04.903258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:10.978 [2024-11-02 23:53:04.903272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.978 [2024-11-02 23:53:04.905344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.978 [2024-11-02 23:53:04.905384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:10.978 BaseBdev1 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.978 BaseBdev2_malloc 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.978 [2024-11-02 23:53:04.931561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:10.978 [2024-11-02 23:53:04.931609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.978 [2024-11-02 23:53:04.931644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:10.978 [2024-11-02 23:53:04.931652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.978 [2024-11-02 23:53:04.933683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.978 [2024-11-02 23:53:04.933723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:10.978 BaseBdev2 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.978 BaseBdev3_malloc 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.978 [2024-11-02 23:53:04.959800] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:10.978 [2024-11-02 23:53:04.959846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.978 [2024-11-02 23:53:04.959886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:10.978 [2024-11-02 23:53:04.959895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.978 [2024-11-02 23:53:04.961832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.978 [2024-11-02 23:53:04.961924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:10.978 BaseBdev3 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.978 spare_malloc 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.978 23:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.978 spare_delay 00:13:10.978 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.978 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:10.978 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:10.978 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.978 [2024-11-02 23:53:05.009254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:10.979 [2024-11-02 23:53:05.009306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.979 [2024-11-02 23:53:05.009332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:10.979 [2024-11-02 23:53:05.009341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.979 [2024-11-02 23:53:05.011399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.979 [2024-11-02 23:53:05.011434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:10.979 spare 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.979 [2024-11-02 23:53:05.021291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.979 [2024-11-02 23:53:05.023068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:10.979 [2024-11-02 23:53:05.023127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:10.979 [2024-11-02 23:53:05.023199] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:10.979 [2024-11-02 23:53:05.023209] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:10.979 [2024-11-02 
23:53:05.023456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:10.979 [2024-11-02 23:53:05.023874] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:10.979 [2024-11-02 23:53:05.023886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:10.979 [2024-11-02 23:53:05.023994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.979 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.238 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.238 "name": "raid_bdev1", 00:13:11.238 "uuid": "c4779804-61e5-4cae-bd1c-929cadbd427c", 00:13:11.238 "strip_size_kb": 64, 00:13:11.238 "state": "online", 00:13:11.238 "raid_level": "raid5f", 00:13:11.238 "superblock": false, 00:13:11.238 "num_base_bdevs": 3, 00:13:11.238 "num_base_bdevs_discovered": 3, 00:13:11.238 "num_base_bdevs_operational": 3, 00:13:11.238 "base_bdevs_list": [ 00:13:11.238 { 00:13:11.238 "name": "BaseBdev1", 00:13:11.238 "uuid": "61d5f0b1-daad-54db-be5f-714b65686e2c", 00:13:11.238 "is_configured": true, 00:13:11.238 "data_offset": 0, 00:13:11.238 "data_size": 65536 00:13:11.238 }, 00:13:11.238 { 00:13:11.238 "name": "BaseBdev2", 00:13:11.238 "uuid": "eb20de8e-c499-5db4-994f-8e71ae879f4e", 00:13:11.238 "is_configured": true, 00:13:11.239 "data_offset": 0, 00:13:11.239 "data_size": 65536 00:13:11.239 }, 00:13:11.239 { 00:13:11.239 "name": "BaseBdev3", 00:13:11.239 "uuid": "9695e317-7ea3-5893-835a-1a806553d0e5", 00:13:11.239 "is_configured": true, 00:13:11.239 "data_offset": 0, 00:13:11.239 "data_size": 65536 00:13:11.239 } 00:13:11.239 ] 00:13:11.239 }' 00:13:11.239 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.239 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.507 23:53:05 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.507 [2024-11-02 23:53:05.476729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:11.507 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:11.795 [2024-11-02 23:53:05.744202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:11.795 /dev/nbd0 00:13:11.795 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:11.795 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:11.795 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:11.795 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:11.795 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:11.795 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:11.795 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:11.795 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:11.795 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:11.796 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:11.796 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.796 1+0 records in 00:13:11.796 1+0 records out 00:13:11.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283454 s, 14.5 MB/s 00:13:11.796 
23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.796 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:11.796 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.796 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:11.796 23:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:11.796 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.796 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:11.796 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:11.796 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:11.796 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:11.796 23:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:13:12.055 512+0 records in 00:13:12.055 512+0 records out 00:13:12.055 67108864 bytes (67 MB, 64 MiB) copied, 0.283133 s, 237 MB/s 00:13:12.055 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:12.055 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:12.055 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:12.055 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:12.055 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:12.055 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:13:12.055 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:12.315 [2024-11-02 23:53:06.310814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.315 [2024-11-02 23:53:06.326882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.315 23:53:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.315 "name": "raid_bdev1", 00:13:12.315 "uuid": "c4779804-61e5-4cae-bd1c-929cadbd427c", 00:13:12.315 "strip_size_kb": 64, 00:13:12.315 "state": "online", 00:13:12.315 "raid_level": "raid5f", 00:13:12.315 "superblock": false, 00:13:12.315 "num_base_bdevs": 3, 00:13:12.315 "num_base_bdevs_discovered": 2, 00:13:12.315 "num_base_bdevs_operational": 2, 00:13:12.315 "base_bdevs_list": [ 00:13:12.315 { 00:13:12.315 "name": null, 00:13:12.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.315 "is_configured": false, 00:13:12.315 "data_offset": 0, 00:13:12.315 "data_size": 65536 00:13:12.315 }, 00:13:12.315 { 00:13:12.315 
"name": "BaseBdev2", 00:13:12.315 "uuid": "eb20de8e-c499-5db4-994f-8e71ae879f4e", 00:13:12.315 "is_configured": true, 00:13:12.315 "data_offset": 0, 00:13:12.315 "data_size": 65536 00:13:12.315 }, 00:13:12.315 { 00:13:12.315 "name": "BaseBdev3", 00:13:12.315 "uuid": "9695e317-7ea3-5893-835a-1a806553d0e5", 00:13:12.315 "is_configured": true, 00:13:12.315 "data_offset": 0, 00:13:12.315 "data_size": 65536 00:13:12.315 } 00:13:12.315 ] 00:13:12.315 }' 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.315 23:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.884 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:12.884 23:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.884 23:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.884 [2024-11-02 23:53:06.810042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:12.885 [2024-11-02 23:53:06.814608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027cd0 00:13:12.885 23:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.885 23:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:12.885 [2024-11-02 23:53:06.816878] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:13.824 23:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.824 23:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.824 23:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.824 23:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:13:13.824 23:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.825 23:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.825 23:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.825 23:53:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.825 23:53:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.825 23:53:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.825 23:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.825 "name": "raid_bdev1", 00:13:13.825 "uuid": "c4779804-61e5-4cae-bd1c-929cadbd427c", 00:13:13.825 "strip_size_kb": 64, 00:13:13.825 "state": "online", 00:13:13.825 "raid_level": "raid5f", 00:13:13.825 "superblock": false, 00:13:13.825 "num_base_bdevs": 3, 00:13:13.825 "num_base_bdevs_discovered": 3, 00:13:13.825 "num_base_bdevs_operational": 3, 00:13:13.825 "process": { 00:13:13.825 "type": "rebuild", 00:13:13.825 "target": "spare", 00:13:13.825 "progress": { 00:13:13.825 "blocks": 20480, 00:13:13.825 "percent": 15 00:13:13.825 } 00:13:13.825 }, 00:13:13.825 "base_bdevs_list": [ 00:13:13.825 { 00:13:13.825 "name": "spare", 00:13:13.825 "uuid": "8efda872-2035-552a-a019-ef9db429a061", 00:13:13.825 "is_configured": true, 00:13:13.825 "data_offset": 0, 00:13:13.825 "data_size": 65536 00:13:13.825 }, 00:13:13.825 { 00:13:13.825 "name": "BaseBdev2", 00:13:13.825 "uuid": "eb20de8e-c499-5db4-994f-8e71ae879f4e", 00:13:13.825 "is_configured": true, 00:13:13.825 "data_offset": 0, 00:13:13.825 "data_size": 65536 00:13:13.825 }, 00:13:13.825 { 00:13:13.825 "name": "BaseBdev3", 00:13:13.825 "uuid": "9695e317-7ea3-5893-835a-1a806553d0e5", 00:13:13.825 "is_configured": true, 00:13:13.825 "data_offset": 0, 00:13:13.825 
"data_size": 65536 00:13:13.825 } 00:13:13.825 ] 00:13:13.825 }' 00:13:13.825 23:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.085 23:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.085 23:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.085 23:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.085 23:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:14.085 23:53:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.085 23:53:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.085 [2024-11-02 23:53:07.980370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.085 [2024-11-02 23:53:08.023670] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:14.085 [2024-11-02 23:53:08.023732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.085 [2024-11-02 23:53:08.023762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.085 [2024-11-02 23:53:08.023797] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:14.085 23:53:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.085 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:14.085 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.085 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.085 23:53:08 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:14.085 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.085 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:14.085 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.085 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.085 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.085 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.085 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.085 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.085 23:53:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.085 23:53:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.085 23:53:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.085 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.085 "name": "raid_bdev1", 00:13:14.085 "uuid": "c4779804-61e5-4cae-bd1c-929cadbd427c", 00:13:14.085 "strip_size_kb": 64, 00:13:14.085 "state": "online", 00:13:14.085 "raid_level": "raid5f", 00:13:14.085 "superblock": false, 00:13:14.085 "num_base_bdevs": 3, 00:13:14.085 "num_base_bdevs_discovered": 2, 00:13:14.085 "num_base_bdevs_operational": 2, 00:13:14.085 "base_bdevs_list": [ 00:13:14.085 { 00:13:14.085 "name": null, 00:13:14.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.085 "is_configured": false, 00:13:14.085 "data_offset": 0, 00:13:14.085 "data_size": 65536 00:13:14.085 }, 00:13:14.085 { 00:13:14.085 "name": "BaseBdev2", 00:13:14.085 
"uuid": "eb20de8e-c499-5db4-994f-8e71ae879f4e", 00:13:14.085 "is_configured": true, 00:13:14.085 "data_offset": 0, 00:13:14.085 "data_size": 65536 00:13:14.085 }, 00:13:14.085 { 00:13:14.085 "name": "BaseBdev3", 00:13:14.085 "uuid": "9695e317-7ea3-5893-835a-1a806553d0e5", 00:13:14.085 "is_configured": true, 00:13:14.085 "data_offset": 0, 00:13:14.085 "data_size": 65536 00:13:14.085 } 00:13:14.085 ] 00:13:14.085 }' 00:13:14.085 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.085 23:53:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.656 "name": "raid_bdev1", 00:13:14.656 "uuid": "c4779804-61e5-4cae-bd1c-929cadbd427c", 00:13:14.656 "strip_size_kb": 64, 00:13:14.656 "state": "online", 00:13:14.656 "raid_level": 
"raid5f", 00:13:14.656 "superblock": false, 00:13:14.656 "num_base_bdevs": 3, 00:13:14.656 "num_base_bdevs_discovered": 2, 00:13:14.656 "num_base_bdevs_operational": 2, 00:13:14.656 "base_bdevs_list": [ 00:13:14.656 { 00:13:14.656 "name": null, 00:13:14.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.656 "is_configured": false, 00:13:14.656 "data_offset": 0, 00:13:14.656 "data_size": 65536 00:13:14.656 }, 00:13:14.656 { 00:13:14.656 "name": "BaseBdev2", 00:13:14.656 "uuid": "eb20de8e-c499-5db4-994f-8e71ae879f4e", 00:13:14.656 "is_configured": true, 00:13:14.656 "data_offset": 0, 00:13:14.656 "data_size": 65536 00:13:14.656 }, 00:13:14.656 { 00:13:14.656 "name": "BaseBdev3", 00:13:14.656 "uuid": "9695e317-7ea3-5893-835a-1a806553d0e5", 00:13:14.656 "is_configured": true, 00:13:14.656 "data_offset": 0, 00:13:14.656 "data_size": 65536 00:13:14.656 } 00:13:14.656 ] 00:13:14.656 }' 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.656 [2024-11-02 23:53:08.616471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:14.656 [2024-11-02 23:53:08.620624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.656 23:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:14.656 [2024-11-02 23:53:08.622770] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:15.596 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.596 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.596 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.596 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.596 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.596 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.596 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.596 23:53:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.596 23:53:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.596 23:53:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.596 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.596 "name": "raid_bdev1", 00:13:15.596 "uuid": "c4779804-61e5-4cae-bd1c-929cadbd427c", 00:13:15.596 "strip_size_kb": 64, 00:13:15.596 "state": "online", 00:13:15.596 "raid_level": "raid5f", 00:13:15.596 "superblock": false, 00:13:15.596 "num_base_bdevs": 3, 00:13:15.596 "num_base_bdevs_discovered": 3, 00:13:15.596 "num_base_bdevs_operational": 3, 00:13:15.596 "process": { 00:13:15.596 "type": "rebuild", 00:13:15.596 "target": "spare", 00:13:15.596 "progress": { 00:13:15.596 "blocks": 20480, 00:13:15.596 
"percent": 15 00:13:15.596 } 00:13:15.596 }, 00:13:15.596 "base_bdevs_list": [ 00:13:15.596 { 00:13:15.596 "name": "spare", 00:13:15.596 "uuid": "8efda872-2035-552a-a019-ef9db429a061", 00:13:15.596 "is_configured": true, 00:13:15.596 "data_offset": 0, 00:13:15.596 "data_size": 65536 00:13:15.596 }, 00:13:15.596 { 00:13:15.596 "name": "BaseBdev2", 00:13:15.596 "uuid": "eb20de8e-c499-5db4-994f-8e71ae879f4e", 00:13:15.596 "is_configured": true, 00:13:15.596 "data_offset": 0, 00:13:15.596 "data_size": 65536 00:13:15.596 }, 00:13:15.596 { 00:13:15.596 "name": "BaseBdev3", 00:13:15.596 "uuid": "9695e317-7ea3-5893-835a-1a806553d0e5", 00:13:15.596 "is_configured": true, 00:13:15.596 "data_offset": 0, 00:13:15.596 "data_size": 65536 00:13:15.596 } 00:13:15.596 ] 00:13:15.596 }' 00:13:15.596 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.855 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.855 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.855 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.855 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:15.855 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:15.855 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:15.855 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=448 00:13:15.855 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:15.855 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.855 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:15.855 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.855 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.855 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.855 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.855 23:53:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.855 23:53:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.855 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.855 23:53:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.855 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.856 "name": "raid_bdev1", 00:13:15.856 "uuid": "c4779804-61e5-4cae-bd1c-929cadbd427c", 00:13:15.856 "strip_size_kb": 64, 00:13:15.856 "state": "online", 00:13:15.856 "raid_level": "raid5f", 00:13:15.856 "superblock": false, 00:13:15.856 "num_base_bdevs": 3, 00:13:15.856 "num_base_bdevs_discovered": 3, 00:13:15.856 "num_base_bdevs_operational": 3, 00:13:15.856 "process": { 00:13:15.856 "type": "rebuild", 00:13:15.856 "target": "spare", 00:13:15.856 "progress": { 00:13:15.856 "blocks": 22528, 00:13:15.856 "percent": 17 00:13:15.856 } 00:13:15.856 }, 00:13:15.856 "base_bdevs_list": [ 00:13:15.856 { 00:13:15.856 "name": "spare", 00:13:15.856 "uuid": "8efda872-2035-552a-a019-ef9db429a061", 00:13:15.856 "is_configured": true, 00:13:15.856 "data_offset": 0, 00:13:15.856 "data_size": 65536 00:13:15.856 }, 00:13:15.856 { 00:13:15.856 "name": "BaseBdev2", 00:13:15.856 "uuid": "eb20de8e-c499-5db4-994f-8e71ae879f4e", 00:13:15.856 "is_configured": true, 00:13:15.856 "data_offset": 0, 00:13:15.856 
"data_size": 65536 00:13:15.856 }, 00:13:15.856 { 00:13:15.856 "name": "BaseBdev3", 00:13:15.856 "uuid": "9695e317-7ea3-5893-835a-1a806553d0e5", 00:13:15.856 "is_configured": true, 00:13:15.856 "data_offset": 0, 00:13:15.856 "data_size": 65536 00:13:15.856 } 00:13:15.856 ] 00:13:15.856 }' 00:13:15.856 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.856 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.856 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.856 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.856 23:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:17.254 23:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:17.254 23:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.254 23:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.254 23:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.254 23:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.254 23:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.254 23:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.254 23:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.254 23:53:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.254 23:53:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.254 23:53:10 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.254 23:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.254 "name": "raid_bdev1", 00:13:17.254 "uuid": "c4779804-61e5-4cae-bd1c-929cadbd427c", 00:13:17.254 "strip_size_kb": 64, 00:13:17.254 "state": "online", 00:13:17.254 "raid_level": "raid5f", 00:13:17.254 "superblock": false, 00:13:17.254 "num_base_bdevs": 3, 00:13:17.254 "num_base_bdevs_discovered": 3, 00:13:17.254 "num_base_bdevs_operational": 3, 00:13:17.254 "process": { 00:13:17.254 "type": "rebuild", 00:13:17.254 "target": "spare", 00:13:17.254 "progress": { 00:13:17.254 "blocks": 47104, 00:13:17.254 "percent": 35 00:13:17.254 } 00:13:17.254 }, 00:13:17.254 "base_bdevs_list": [ 00:13:17.254 { 00:13:17.254 "name": "spare", 00:13:17.254 "uuid": "8efda872-2035-552a-a019-ef9db429a061", 00:13:17.254 "is_configured": true, 00:13:17.254 "data_offset": 0, 00:13:17.254 "data_size": 65536 00:13:17.254 }, 00:13:17.254 { 00:13:17.254 "name": "BaseBdev2", 00:13:17.254 "uuid": "eb20de8e-c499-5db4-994f-8e71ae879f4e", 00:13:17.254 "is_configured": true, 00:13:17.254 "data_offset": 0, 00:13:17.254 "data_size": 65536 00:13:17.254 }, 00:13:17.254 { 00:13:17.254 "name": "BaseBdev3", 00:13:17.254 "uuid": "9695e317-7ea3-5893-835a-1a806553d0e5", 00:13:17.254 "is_configured": true, 00:13:17.254 "data_offset": 0, 00:13:17.254 "data_size": 65536 00:13:17.254 } 00:13:17.254 ] 00:13:17.254 }' 00:13:17.254 23:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.254 23:53:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:17.254 23:53:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.254 23:53:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:17.254 23:53:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:13:18.193 23:53:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:18.193 23:53:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:18.194 23:53:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.194 23:53:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:18.194 23:53:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:18.194 23:53:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.194 23:53:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.194 23:53:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.194 23:53:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.194 23:53:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.194 23:53:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.194 23:53:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.194 "name": "raid_bdev1", 00:13:18.194 "uuid": "c4779804-61e5-4cae-bd1c-929cadbd427c", 00:13:18.194 "strip_size_kb": 64, 00:13:18.194 "state": "online", 00:13:18.194 "raid_level": "raid5f", 00:13:18.194 "superblock": false, 00:13:18.194 "num_base_bdevs": 3, 00:13:18.194 "num_base_bdevs_discovered": 3, 00:13:18.194 "num_base_bdevs_operational": 3, 00:13:18.194 "process": { 00:13:18.194 "type": "rebuild", 00:13:18.194 "target": "spare", 00:13:18.194 "progress": { 00:13:18.194 "blocks": 69632, 00:13:18.194 "percent": 53 00:13:18.194 } 00:13:18.194 }, 00:13:18.194 "base_bdevs_list": [ 00:13:18.194 { 00:13:18.194 "name": "spare", 00:13:18.194 "uuid": 
"8efda872-2035-552a-a019-ef9db429a061", 00:13:18.194 "is_configured": true, 00:13:18.194 "data_offset": 0, 00:13:18.194 "data_size": 65536 00:13:18.194 }, 00:13:18.194 { 00:13:18.194 "name": "BaseBdev2", 00:13:18.194 "uuid": "eb20de8e-c499-5db4-994f-8e71ae879f4e", 00:13:18.194 "is_configured": true, 00:13:18.194 "data_offset": 0, 00:13:18.194 "data_size": 65536 00:13:18.194 }, 00:13:18.194 { 00:13:18.194 "name": "BaseBdev3", 00:13:18.194 "uuid": "9695e317-7ea3-5893-835a-1a806553d0e5", 00:13:18.194 "is_configured": true, 00:13:18.194 "data_offset": 0, 00:13:18.194 "data_size": 65536 00:13:18.194 } 00:13:18.194 ] 00:13:18.194 }' 00:13:18.194 23:53:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.194 23:53:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:18.194 23:53:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.194 23:53:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.194 23:53:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:19.133 23:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:19.133 23:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.133 23:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.133 23:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.133 23:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.133 23:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.133 23:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.133 23:53:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.133 23:53:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.133 23:53:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.393 23:53:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.393 23:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.393 "name": "raid_bdev1", 00:13:19.393 "uuid": "c4779804-61e5-4cae-bd1c-929cadbd427c", 00:13:19.393 "strip_size_kb": 64, 00:13:19.393 "state": "online", 00:13:19.393 "raid_level": "raid5f", 00:13:19.393 "superblock": false, 00:13:19.393 "num_base_bdevs": 3, 00:13:19.393 "num_base_bdevs_discovered": 3, 00:13:19.393 "num_base_bdevs_operational": 3, 00:13:19.393 "process": { 00:13:19.393 "type": "rebuild", 00:13:19.393 "target": "spare", 00:13:19.393 "progress": { 00:13:19.393 "blocks": 92160, 00:13:19.393 "percent": 70 00:13:19.393 } 00:13:19.393 }, 00:13:19.393 "base_bdevs_list": [ 00:13:19.393 { 00:13:19.393 "name": "spare", 00:13:19.393 "uuid": "8efda872-2035-552a-a019-ef9db429a061", 00:13:19.393 "is_configured": true, 00:13:19.393 "data_offset": 0, 00:13:19.393 "data_size": 65536 00:13:19.393 }, 00:13:19.393 { 00:13:19.393 "name": "BaseBdev2", 00:13:19.393 "uuid": "eb20de8e-c499-5db4-994f-8e71ae879f4e", 00:13:19.393 "is_configured": true, 00:13:19.393 "data_offset": 0, 00:13:19.393 "data_size": 65536 00:13:19.393 }, 00:13:19.393 { 00:13:19.393 "name": "BaseBdev3", 00:13:19.393 "uuid": "9695e317-7ea3-5893-835a-1a806553d0e5", 00:13:19.393 "is_configured": true, 00:13:19.393 "data_offset": 0, 00:13:19.393 "data_size": 65536 00:13:19.393 } 00:13:19.393 ] 00:13:19.393 }' 00:13:19.393 23:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.393 23:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.393 23:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.393 23:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.393 23:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:20.333 23:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:20.333 23:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.333 23:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.333 23:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.333 23:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.333 23:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.333 23:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.333 23:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.333 23:53:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.333 23:53:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.333 23:53:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.333 23:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.333 "name": "raid_bdev1", 00:13:20.333 "uuid": "c4779804-61e5-4cae-bd1c-929cadbd427c", 00:13:20.333 "strip_size_kb": 64, 00:13:20.333 "state": "online", 00:13:20.333 "raid_level": "raid5f", 00:13:20.333 "superblock": false, 00:13:20.333 "num_base_bdevs": 3, 00:13:20.333 "num_base_bdevs_discovered": 3, 00:13:20.333 
"num_base_bdevs_operational": 3, 00:13:20.333 "process": { 00:13:20.333 "type": "rebuild", 00:13:20.333 "target": "spare", 00:13:20.333 "progress": { 00:13:20.333 "blocks": 116736, 00:13:20.333 "percent": 89 00:13:20.333 } 00:13:20.333 }, 00:13:20.333 "base_bdevs_list": [ 00:13:20.333 { 00:13:20.333 "name": "spare", 00:13:20.333 "uuid": "8efda872-2035-552a-a019-ef9db429a061", 00:13:20.333 "is_configured": true, 00:13:20.333 "data_offset": 0, 00:13:20.333 "data_size": 65536 00:13:20.333 }, 00:13:20.333 { 00:13:20.333 "name": "BaseBdev2", 00:13:20.333 "uuid": "eb20de8e-c499-5db4-994f-8e71ae879f4e", 00:13:20.333 "is_configured": true, 00:13:20.333 "data_offset": 0, 00:13:20.333 "data_size": 65536 00:13:20.333 }, 00:13:20.333 { 00:13:20.333 "name": "BaseBdev3", 00:13:20.334 "uuid": "9695e317-7ea3-5893-835a-1a806553d0e5", 00:13:20.334 "is_configured": true, 00:13:20.334 "data_offset": 0, 00:13:20.334 "data_size": 65536 00:13:20.334 } 00:13:20.334 ] 00:13:20.334 }' 00:13:20.334 23:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.592 23:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.592 23:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.592 23:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.592 23:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.174 [2024-11-02 23:53:15.055530] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:21.174 [2024-11-02 23:53:15.055602] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:21.174 [2024-11-02 23:53:15.055670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.435 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:13:21.435 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.435 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.435 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.435 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.436 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.436 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.436 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.436 23:53:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.436 23:53:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.695 23:53:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.695 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.695 "name": "raid_bdev1", 00:13:21.695 "uuid": "c4779804-61e5-4cae-bd1c-929cadbd427c", 00:13:21.696 "strip_size_kb": 64, 00:13:21.696 "state": "online", 00:13:21.696 "raid_level": "raid5f", 00:13:21.696 "superblock": false, 00:13:21.696 "num_base_bdevs": 3, 00:13:21.696 "num_base_bdevs_discovered": 3, 00:13:21.696 "num_base_bdevs_operational": 3, 00:13:21.696 "base_bdevs_list": [ 00:13:21.696 { 00:13:21.696 "name": "spare", 00:13:21.696 "uuid": "8efda872-2035-552a-a019-ef9db429a061", 00:13:21.696 "is_configured": true, 00:13:21.696 "data_offset": 0, 00:13:21.696 "data_size": 65536 00:13:21.696 }, 00:13:21.696 { 00:13:21.696 "name": "BaseBdev2", 00:13:21.696 "uuid": "eb20de8e-c499-5db4-994f-8e71ae879f4e", 00:13:21.696 "is_configured": true, 00:13:21.696 
"data_offset": 0, 00:13:21.696 "data_size": 65536 00:13:21.696 }, 00:13:21.696 { 00:13:21.696 "name": "BaseBdev3", 00:13:21.696 "uuid": "9695e317-7ea3-5893-835a-1a806553d0e5", 00:13:21.696 "is_configured": true, 00:13:21.696 "data_offset": 0, 00:13:21.696 "data_size": 65536 00:13:21.696 } 00:13:21.696 ] 00:13:21.696 }' 00:13:21.696 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.696 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:21.696 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.696 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:21.696 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:21.696 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:21.696 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.696 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:21.696 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:21.696 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.696 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.696 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.696 23:53:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.696 23:53:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.696 23:53:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.696 23:53:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.696 "name": "raid_bdev1", 00:13:21.696 "uuid": "c4779804-61e5-4cae-bd1c-929cadbd427c", 00:13:21.696 "strip_size_kb": 64, 00:13:21.696 "state": "online", 00:13:21.696 "raid_level": "raid5f", 00:13:21.696 "superblock": false, 00:13:21.696 "num_base_bdevs": 3, 00:13:21.696 "num_base_bdevs_discovered": 3, 00:13:21.696 "num_base_bdevs_operational": 3, 00:13:21.696 "base_bdevs_list": [ 00:13:21.696 { 00:13:21.696 "name": "spare", 00:13:21.696 "uuid": "8efda872-2035-552a-a019-ef9db429a061", 00:13:21.696 "is_configured": true, 00:13:21.696 "data_offset": 0, 00:13:21.696 "data_size": 65536 00:13:21.696 }, 00:13:21.696 { 00:13:21.696 "name": "BaseBdev2", 00:13:21.696 "uuid": "eb20de8e-c499-5db4-994f-8e71ae879f4e", 00:13:21.696 "is_configured": true, 00:13:21.696 "data_offset": 0, 00:13:21.696 "data_size": 65536 00:13:21.696 }, 00:13:21.696 { 00:13:21.696 "name": "BaseBdev3", 00:13:21.696 "uuid": "9695e317-7ea3-5893-835a-1a806553d0e5", 00:13:21.696 "is_configured": true, 00:13:21.696 "data_offset": 0, 00:13:21.696 "data_size": 65536 00:13:21.696 } 00:13:21.696 ] 00:13:21.696 }' 00:13:21.696 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.696 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:21.696 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.956 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:21.956 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:21.956 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.956 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.956 23:53:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:21.956 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.956 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.956 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.956 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.956 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.956 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.956 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.956 23:53:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.956 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.956 23:53:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.956 23:53:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.956 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.956 "name": "raid_bdev1", 00:13:21.956 "uuid": "c4779804-61e5-4cae-bd1c-929cadbd427c", 00:13:21.956 "strip_size_kb": 64, 00:13:21.956 "state": "online", 00:13:21.956 "raid_level": "raid5f", 00:13:21.956 "superblock": false, 00:13:21.956 "num_base_bdevs": 3, 00:13:21.956 "num_base_bdevs_discovered": 3, 00:13:21.956 "num_base_bdevs_operational": 3, 00:13:21.956 "base_bdevs_list": [ 00:13:21.956 { 00:13:21.956 "name": "spare", 00:13:21.956 "uuid": "8efda872-2035-552a-a019-ef9db429a061", 00:13:21.956 "is_configured": true, 00:13:21.956 "data_offset": 0, 00:13:21.956 "data_size": 65536 00:13:21.956 }, 00:13:21.956 { 00:13:21.956 
"name": "BaseBdev2", 00:13:21.956 "uuid": "eb20de8e-c499-5db4-994f-8e71ae879f4e", 00:13:21.956 "is_configured": true, 00:13:21.956 "data_offset": 0, 00:13:21.956 "data_size": 65536 00:13:21.956 }, 00:13:21.956 { 00:13:21.956 "name": "BaseBdev3", 00:13:21.956 "uuid": "9695e317-7ea3-5893-835a-1a806553d0e5", 00:13:21.956 "is_configured": true, 00:13:21.956 "data_offset": 0, 00:13:21.956 "data_size": 65536 00:13:21.956 } 00:13:21.956 ] 00:13:21.956 }' 00:13:21.956 23:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.956 23:53:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.216 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:22.216 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.216 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.216 [2024-11-02 23:53:16.194930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:22.216 [2024-11-02 23:53:16.194963] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.216 [2024-11-02 23:53:16.195044] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.216 [2024-11-02 23:53:16.195123] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.216 [2024-11-02 23:53:16.195138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:22.216 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.216 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.217 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:22.217 23:53:16 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.217 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.217 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.217 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:22.217 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:22.217 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:22.217 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:22.217 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:22.217 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:22.217 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:22.217 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:22.217 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:22.217 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:22.217 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:22.217 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:22.217 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:22.477 /dev/nbd0 00:13:22.477 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:22.477 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:22.477 23:53:16 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:22.477 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:22.477 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:22.477 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:22.477 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:22.477 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:22.477 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:22.477 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:22.477 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.477 1+0 records in 00:13:22.477 1+0 records out 00:13:22.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241665 s, 16.9 MB/s 00:13:22.477 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.477 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:22.477 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.477 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:22.477 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:22.477 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.477 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:22.477 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:22.737 /dev/nbd1 00:13:22.737 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:22.737 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:22.737 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:22.737 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:22.737 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:22.737 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:22.737 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:22.737 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:22.737 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:22.737 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:22.737 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.737 1+0 records in 00:13:22.737 1+0 records out 00:13:22.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521356 s, 7.9 MB/s 00:13:22.737 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.737 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:22.737 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.737 23:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:22.737 23:53:16 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:22.737 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.737 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:22.737 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:22.996 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:22.996 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:22.996 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:22.996 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:22.996 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:22.996 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.996 23:53:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:22.996 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:22.996 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:22.996 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:22.996 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.996 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.996 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:22.996 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:22.996 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:13:22.996 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.996 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:23.256 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:23.256 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:23.256 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:23.256 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.256 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.256 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:23.256 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:23.257 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.257 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:23.257 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 91885 00:13:23.257 23:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 91885 ']' 00:13:23.257 23:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 91885 00:13:23.257 23:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:13:23.257 23:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:23.257 23:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 91885 00:13:23.257 killing process with pid 91885 00:13:23.257 Received shutdown signal, test time was about 60.000000 seconds 00:13:23.257 00:13:23.257 Latency(us) 00:13:23.257 
[2024-11-02T23:53:17.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.257 [2024-11-02T23:53:17.352Z] =================================================================================================================== 00:13:23.257 [2024-11-02T23:53:17.352Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:23.257 23:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:23.257 23:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:23.257 23:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 91885' 00:13:23.257 23:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 91885 00:13:23.257 [2024-11-02 23:53:17.302473] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:23.257 23:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 91885 00:13:23.257 [2024-11-02 23:53:17.342283] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:23.517 23:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:23.517 ************************************ 00:13:23.517 END TEST raid5f_rebuild_test 00:13:23.517 ************************************ 00:13:23.517 00:13:23.517 real 0m13.561s 00:13:23.517 user 0m17.020s 00:13:23.517 sys 0m1.935s 00:13:23.517 23:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:23.517 23:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.517 23:53:17 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:13:23.517 23:53:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:23.517 23:53:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:23.517 23:53:17 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:13:23.777 ************************************ 00:13:23.777 START TEST raid5f_rebuild_test_sb 00:13:23.777 ************************************ 00:13:23.777 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:13:23.777 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:23.777 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:23.777 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:23.777 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:23.777 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:23.777 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:23.777 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:23.777 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:23.777 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:23.777 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:23.777 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:23.777 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:23.777 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:23.777 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:23.777 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:23.777 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:13:23.777 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:23.777 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92310 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92310 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 92310 ']' 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:23.778 23:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.778 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:23.778 Zero copy mechanism will not be used. 00:13:23.778 [2024-11-02 23:53:17.720612] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:13:23.778 [2024-11-02 23:53:17.720759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92310 ] 00:13:24.036 [2024-11-02 23:53:17.875801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.036 [2024-11-02 23:53:17.901069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.036 [2024-11-02 23:53:17.943376] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.036 [2024-11-02 23:53:17.943407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.612 BaseBdev1_malloc 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.612 [2024-11-02 23:53:18.549124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:24.612 [2024-11-02 23:53:18.549177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.612 [2024-11-02 23:53:18.549221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:24.612 [2024-11-02 23:53:18.549235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.612 [2024-11-02 23:53:18.551210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.612 [2024-11-02 23:53:18.551247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:24.612 BaseBdev1 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:24.612 23:53:18 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.612 BaseBdev2_malloc 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.612 [2024-11-02 23:53:18.577446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:24.612 [2024-11-02 23:53:18.577488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.612 [2024-11-02 23:53:18.577523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:24.612 [2024-11-02 23:53:18.577532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.612 [2024-11-02 23:53:18.579545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.612 [2024-11-02 23:53:18.579587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:24.612 BaseBdev2 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:24.612 BaseBdev3_malloc 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.612 [2024-11-02 23:53:18.605661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:24.612 [2024-11-02 23:53:18.605706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.612 [2024-11-02 23:53:18.605743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:24.612 [2024-11-02 23:53:18.605751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.612 [2024-11-02 23:53:18.607733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.612 [2024-11-02 23:53:18.607776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:24.612 BaseBdev3 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.612 spare_malloc 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.612 spare_delay 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.612 [2024-11-02 23:53:18.664239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:24.612 [2024-11-02 23:53:18.664306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.612 [2024-11-02 23:53:18.664344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:24.612 [2024-11-02 23:53:18.664357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.612 [2024-11-02 23:53:18.667377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.612 [2024-11-02 23:53:18.667424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:24.612 spare 00:13:24.612 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.613 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:24.613 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.613 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.613 [2024-11-02 23:53:18.676350] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:24.613 [2024-11-02 23:53:18.678151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:24.613 [2024-11-02 23:53:18.678214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:24.613 [2024-11-02 23:53:18.678367] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:24.613 [2024-11-02 23:53:18.678381] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:24.613 [2024-11-02 23:53:18.678624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:24.613 [2024-11-02 23:53:18.679061] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:24.613 [2024-11-02 23:53:18.679114] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:24.613 [2024-11-02 23:53:18.679240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.613 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.613 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:24.613 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.613 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.613 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:24.613 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.613 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.613 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:13:24.613 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.613 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.613 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.613 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.613 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.613 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.613 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.875 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.875 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.875 "name": "raid_bdev1", 00:13:24.875 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9", 00:13:24.875 "strip_size_kb": 64, 00:13:24.875 "state": "online", 00:13:24.875 "raid_level": "raid5f", 00:13:24.875 "superblock": true, 00:13:24.875 "num_base_bdevs": 3, 00:13:24.875 "num_base_bdevs_discovered": 3, 00:13:24.875 "num_base_bdevs_operational": 3, 00:13:24.875 "base_bdevs_list": [ 00:13:24.875 { 00:13:24.875 "name": "BaseBdev1", 00:13:24.875 "uuid": "a3205f43-d663-568c-b3d0-92804bd274d9", 00:13:24.875 "is_configured": true, 00:13:24.875 "data_offset": 2048, 00:13:24.875 "data_size": 63488 00:13:24.875 }, 00:13:24.875 { 00:13:24.875 "name": "BaseBdev2", 00:13:24.875 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f", 00:13:24.875 "is_configured": true, 00:13:24.875 "data_offset": 2048, 00:13:24.875 "data_size": 63488 00:13:24.875 }, 00:13:24.875 { 00:13:24.875 "name": "BaseBdev3", 00:13:24.875 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd", 00:13:24.875 "is_configured": true, 
00:13:24.875 "data_offset": 2048, 00:13:24.875 "data_size": 63488 00:13:24.875 } 00:13:24.875 ] 00:13:24.875 }' 00:13:24.875 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.875 23:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:25.135 [2024-11-02 23:53:19.116000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:25.135 23:53:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:25.135 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:25.395 [2024-11-02 23:53:19.387405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:25.395 /dev/nbd0 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 
-- # (( i <= 20 )) 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.395 1+0 records in 00:13:25.395 1+0 records out 00:13:25.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402873 s, 10.2 MB/s 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:25.395 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:13:25.965 496+0 records in 00:13:25.965 496+0 records out 00:13:25.965 65011712 bytes (65 MB, 62 MiB) copied, 0.279386 s, 233 MB/s 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:25.965 [2024-11-02 23:53:19.963953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 [2024-11-02 23:53:19.981883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.965 23:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.965 23:53:19 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 23:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.965 23:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.965 "name": "raid_bdev1", 00:13:25.965 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9", 00:13:25.965 "strip_size_kb": 64, 00:13:25.965 "state": "online", 00:13:25.965 "raid_level": "raid5f", 00:13:25.965 "superblock": true, 00:13:25.965 "num_base_bdevs": 3, 00:13:25.965 "num_base_bdevs_discovered": 2, 00:13:25.965 "num_base_bdevs_operational": 2, 00:13:25.965 "base_bdevs_list": [ 00:13:25.965 { 00:13:25.965 "name": null, 00:13:25.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.965 "is_configured": false, 00:13:25.965 "data_offset": 0, 00:13:25.965 "data_size": 63488 00:13:25.965 }, 00:13:25.965 { 00:13:25.965 "name": "BaseBdev2", 00:13:25.965 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f", 00:13:25.965 "is_configured": true, 00:13:25.965 "data_offset": 2048, 00:13:25.965 "data_size": 63488 00:13:25.965 }, 00:13:25.965 { 00:13:25.965 "name": "BaseBdev3", 00:13:25.965 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd", 00:13:25.965 "is_configured": true, 00:13:25.965 "data_offset": 2048, 00:13:25.965 "data_size": 63488 00:13:25.965 } 00:13:25.965 ] 00:13:25.965 }' 00:13:25.965 23:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.965 23:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.545 23:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:26.545 23:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.545 23:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.545 [2024-11-02 23:53:20.413162] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.545 [2024-11-02 23:53:20.417638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000255d0 00:13:26.545 23:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.545 23:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:26.545 [2024-11-02 23:53:20.419811] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:27.486 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.486 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.486 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.486 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.486 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.486 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.486 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.486 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.486 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.486 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.486 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.486 "name": "raid_bdev1", 00:13:27.486 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9", 00:13:27.486 "strip_size_kb": 64, 00:13:27.486 "state": "online", 00:13:27.486 "raid_level": "raid5f", 00:13:27.486 
"superblock": true, 00:13:27.486 "num_base_bdevs": 3, 00:13:27.486 "num_base_bdevs_discovered": 3, 00:13:27.486 "num_base_bdevs_operational": 3, 00:13:27.486 "process": { 00:13:27.486 "type": "rebuild", 00:13:27.486 "target": "spare", 00:13:27.486 "progress": { 00:13:27.486 "blocks": 20480, 00:13:27.486 "percent": 16 00:13:27.486 } 00:13:27.486 }, 00:13:27.486 "base_bdevs_list": [ 00:13:27.486 { 00:13:27.486 "name": "spare", 00:13:27.486 "uuid": "32a8a41f-2787-5b17-81a7-8f30ebe71279", 00:13:27.486 "is_configured": true, 00:13:27.486 "data_offset": 2048, 00:13:27.486 "data_size": 63488 00:13:27.486 }, 00:13:27.486 { 00:13:27.486 "name": "BaseBdev2", 00:13:27.486 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f", 00:13:27.486 "is_configured": true, 00:13:27.486 "data_offset": 2048, 00:13:27.486 "data_size": 63488 00:13:27.486 }, 00:13:27.486 { 00:13:27.486 "name": "BaseBdev3", 00:13:27.486 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd", 00:13:27.486 "is_configured": true, 00:13:27.486 "data_offset": 2048, 00:13:27.486 "data_size": 63488 00:13:27.486 } 00:13:27.486 ] 00:13:27.486 }' 00:13:27.486 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.486 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.486 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.486 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.486 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:27.486 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.486 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.486 [2024-11-02 23:53:21.572414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:13:27.746 [2024-11-02 23:53:21.626599] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:27.746 [2024-11-02 23:53:21.626720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:27.746 [2024-11-02 23:53:21.626738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:27.746 [2024-11-02 23:53:21.626751] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:27.746 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.746 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:13:27.746 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:27.746 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:27.746 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:27.746 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:27.746 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:27.746 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:27.746 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:27.746 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:27.746 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:27.746 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:27.746 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:27.746 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.746 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:27.746 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.746 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:27.746 "name": "raid_bdev1",
00:13:27.746 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9",
00:13:27.746 "strip_size_kb": 64,
00:13:27.746 "state": "online",
00:13:27.746 "raid_level": "raid5f",
00:13:27.746 "superblock": true,
00:13:27.746 "num_base_bdevs": 3,
00:13:27.746 "num_base_bdevs_discovered": 2,
00:13:27.746 "num_base_bdevs_operational": 2,
00:13:27.746 "base_bdevs_list": [
00:13:27.746 {
00:13:27.746 "name": null,
00:13:27.746 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:27.746 "is_configured": false,
00:13:27.746 "data_offset": 0,
00:13:27.746 "data_size": 63488
00:13:27.746 },
00:13:27.746 {
00:13:27.746 "name": "BaseBdev2",
00:13:27.746 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f",
00:13:27.746 "is_configured": true,
00:13:27.746 "data_offset": 2048,
00:13:27.746 "data_size": 63488
00:13:27.746 },
00:13:27.746 {
00:13:27.746 "name": "BaseBdev3",
00:13:27.746 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd",
00:13:27.746 "is_configured": true,
00:13:27.746 "data_offset": 2048,
00:13:27.746 "data_size": 63488
00:13:27.746 }
00:13:27.746 ]
00:13:27.746 }'
00:13:27.746 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:27.746 23:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:28.006 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:28.006 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:28.006 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:28.006 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:28.006 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:28.006 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:28.006 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:28.006 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:28.006 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:28.006 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.006 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:28.006 "name": "raid_bdev1",
00:13:28.006 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9",
00:13:28.006 "strip_size_kb": 64,
00:13:28.006 "state": "online",
00:13:28.006 "raid_level": "raid5f",
00:13:28.006 "superblock": true,
00:13:28.006 "num_base_bdevs": 3,
00:13:28.006 "num_base_bdevs_discovered": 2,
00:13:28.006 "num_base_bdevs_operational": 2,
00:13:28.006 "base_bdevs_list": [
00:13:28.006 {
00:13:28.006 "name": null,
00:13:28.006 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:28.006 "is_configured": false,
00:13:28.006 "data_offset": 0,
00:13:28.006 "data_size": 63488
00:13:28.006 },
00:13:28.006 {
00:13:28.006 "name": "BaseBdev2",
00:13:28.006 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f",
00:13:28.006 "is_configured": true,
00:13:28.006 "data_offset": 2048,
00:13:28.006 "data_size": 63488
00:13:28.006 },
00:13:28.006 {
00:13:28.006 "name": "BaseBdev3",
00:13:28.006 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd",
00:13:28.006 "is_configured": true,
00:13:28.006 "data_offset": 2048,
00:13:28.006 "data_size": 63488
00:13:28.006 }
00:13:28.006 ]
00:13:28.007 }'
00:13:28.007 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:28.267 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:28.267 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:28.267 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:28.267 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:28.267 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:28.267 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:28.267 [2024-11-02 23:53:22.191877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:28.267 [2024-11-02 23:53:22.195957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0
00:13:28.267 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.267 23:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1
00:13:28.267 [2024-11-02 23:53:22.198034] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:29.206 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:29.206 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:29.206 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:29.206 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:29.206 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:29.206 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:29.206 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:29.206 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:29.206 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:29.206 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:29.206 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:29.206 "name": "raid_bdev1",
00:13:29.206 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9",
00:13:29.206 "strip_size_kb": 64,
00:13:29.206 "state": "online",
00:13:29.206 "raid_level": "raid5f",
00:13:29.206 "superblock": true,
00:13:29.206 "num_base_bdevs": 3,
00:13:29.206 "num_base_bdevs_discovered": 3,
00:13:29.206 "num_base_bdevs_operational": 3,
00:13:29.207 "process": {
00:13:29.207 "type": "rebuild",
00:13:29.207 "target": "spare",
00:13:29.207 "progress": {
00:13:29.207 "blocks": 20480,
00:13:29.207 "percent": 16
00:13:29.207 }
00:13:29.207 },
00:13:29.207 "base_bdevs_list": [
00:13:29.207 {
00:13:29.207 "name": "spare",
00:13:29.207 "uuid": "32a8a41f-2787-5b17-81a7-8f30ebe71279",
00:13:29.207 "is_configured": true,
00:13:29.207 "data_offset": 2048,
00:13:29.207 "data_size": 63488
00:13:29.207 },
00:13:29.207 {
00:13:29.207 "name": "BaseBdev2",
00:13:29.207 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f",
00:13:29.207 "is_configured": true,
00:13:29.207 "data_offset": 2048,
00:13:29.207 "data_size": 63488
00:13:29.207 },
00:13:29.207 {
00:13:29.207 "name": "BaseBdev3",
00:13:29.207 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd",
00:13:29.207 "is_configured": true,
00:13:29.207 "data_offset": 2048,
00:13:29.207 "data_size": 63488
00:13:29.207 }
00:13:29.207 ]
00:13:29.207 }'
00:13:29.207 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']'
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=462
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:29.467 "name": "raid_bdev1",
00:13:29.467 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9",
00:13:29.467 "strip_size_kb": 64,
00:13:29.467 "state": "online",
00:13:29.467 "raid_level": "raid5f",
00:13:29.467 "superblock": true,
00:13:29.467 "num_base_bdevs": 3,
00:13:29.467 "num_base_bdevs_discovered": 3,
00:13:29.467 "num_base_bdevs_operational": 3,
00:13:29.467 "process": {
00:13:29.467 "type": "rebuild",
00:13:29.467 "target": "spare",
00:13:29.467 "progress": {
00:13:29.467 "blocks": 22528,
00:13:29.467 "percent": 17
00:13:29.467 }
00:13:29.467 },
00:13:29.467 "base_bdevs_list": [
00:13:29.467 {
00:13:29.467 "name": "spare",
00:13:29.467 "uuid": "32a8a41f-2787-5b17-81a7-8f30ebe71279",
00:13:29.467 "is_configured": true,
00:13:29.467 "data_offset": 2048,
00:13:29.467 "data_size": 63488
00:13:29.467 },
00:13:29.467 {
00:13:29.467 "name": "BaseBdev2",
00:13:29.467 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f",
00:13:29.467 "is_configured": true,
00:13:29.467 "data_offset": 2048,
00:13:29.467 "data_size": 63488
00:13:29.467 },
00:13:29.467 {
00:13:29.467 "name": "BaseBdev3",
00:13:29.467 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd",
00:13:29.467 "is_configured": true,
00:13:29.467 "data_offset": 2048,
00:13:29.467 "data_size": 63488
00:13:29.467 }
00:13:29.467 ]
00:13:29.467 }'
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:29.467 23:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:30.847 23:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:30.847 23:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:30.847 23:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:30.847 23:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:30.847 23:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:30.847 23:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:30.847 23:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:30.847 23:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:30.847 23:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:30.847 23:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:30.847 23:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:30.847 23:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:30.847 "name": "raid_bdev1",
00:13:30.847 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9",
00:13:30.847 "strip_size_kb": 64,
00:13:30.847 "state": "online",
00:13:30.847 "raid_level": "raid5f",
00:13:30.847 "superblock": true,
00:13:30.847 "num_base_bdevs": 3,
00:13:30.847 "num_base_bdevs_discovered": 3,
00:13:30.847 "num_base_bdevs_operational": 3,
00:13:30.847 "process": {
00:13:30.847 "type": "rebuild",
00:13:30.847 "target": "spare",
00:13:30.847 "progress": {
00:13:30.847 "blocks": 45056,
00:13:30.847 "percent": 35
00:13:30.847 }
00:13:30.847 },
00:13:30.847 "base_bdevs_list": [
00:13:30.847 {
00:13:30.847 "name": "spare",
00:13:30.847 "uuid": "32a8a41f-2787-5b17-81a7-8f30ebe71279",
00:13:30.847 "is_configured": true,
00:13:30.847 "data_offset": 2048,
00:13:30.847 "data_size": 63488
00:13:30.847 },
00:13:30.847 {
00:13:30.847 "name": "BaseBdev2",
00:13:30.847 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f",
00:13:30.847 "is_configured": true,
00:13:30.847 "data_offset": 2048,
00:13:30.847 "data_size": 63488
00:13:30.847 },
00:13:30.847 {
00:13:30.847 "name": "BaseBdev3",
00:13:30.847 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd",
00:13:30.847 "is_configured": true,
00:13:30.847 "data_offset": 2048,
00:13:30.847 "data_size": 63488
00:13:30.847 }
00:13:30.847 ]
00:13:30.847 }'
00:13:30.847 23:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:30.847 23:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:30.847 23:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:30.847 23:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:30.847 23:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:31.788 23:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:31.788 23:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:31.788 23:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:31.788 23:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:31.788 23:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:31.788 23:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:31.788 23:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:31.788 23:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:31.788 23:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:31.788 23:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:31.788 23:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:31.788 23:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:31.788 "name": "raid_bdev1",
00:13:31.788 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9",
00:13:31.788 "strip_size_kb": 64,
00:13:31.788 "state": "online",
00:13:31.788 "raid_level": "raid5f",
00:13:31.788 "superblock": true,
00:13:31.788 "num_base_bdevs": 3,
00:13:31.788 "num_base_bdevs_discovered": 3,
00:13:31.788 "num_base_bdevs_operational": 3,
00:13:31.788 "process": {
00:13:31.788 "type": "rebuild",
00:13:31.788 "target": "spare",
00:13:31.788 "progress": {
00:13:31.788 "blocks": 69632,
00:13:31.788 "percent": 54
00:13:31.788 }
00:13:31.788 },
00:13:31.788 "base_bdevs_list": [
00:13:31.788 {
00:13:31.788 "name": "spare",
00:13:31.788 "uuid": "32a8a41f-2787-5b17-81a7-8f30ebe71279",
00:13:31.788 "is_configured": true,
00:13:31.788 "data_offset": 2048,
00:13:31.788 "data_size": 63488
00:13:31.788 },
00:13:31.788 {
00:13:31.788 "name": "BaseBdev2",
00:13:31.788 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f",
00:13:31.788 "is_configured": true,
00:13:31.788 "data_offset": 2048,
00:13:31.788 "data_size": 63488
00:13:31.788 },
00:13:31.788 {
00:13:31.788 "name": "BaseBdev3",
00:13:31.788 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd",
00:13:31.788 "is_configured": true,
00:13:31.788 "data_offset": 2048,
00:13:31.788 "data_size": 63488
00:13:31.788 }
00:13:31.788 ]
00:13:31.788 }'
00:13:31.788 23:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:31.788 23:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:31.788 23:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:31.788 23:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:31.788 23:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:32.727 23:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:32.727 23:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:32.727 23:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:32.727 23:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:32.727 23:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:32.727 23:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:32.727 23:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:32.727 23:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:32.727 23:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:32.727 23:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:32.727 23:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:32.987 23:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:32.987 "name": "raid_bdev1",
00:13:32.987 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9",
00:13:32.987 "strip_size_kb": 64,
00:13:32.987 "state": "online",
00:13:32.987 "raid_level": "raid5f",
00:13:32.987 "superblock": true,
00:13:32.987 "num_base_bdevs": 3,
00:13:32.987 "num_base_bdevs_discovered": 3,
00:13:32.987 "num_base_bdevs_operational": 3,
00:13:32.987 "process": {
00:13:32.987 "type": "rebuild",
00:13:32.987 "target": "spare",
00:13:32.987 "progress": {
00:13:32.987 "blocks": 92160,
00:13:32.987 "percent": 72
00:13:32.987 }
00:13:32.987 },
00:13:32.987 "base_bdevs_list": [
00:13:32.987 {
00:13:32.987 "name": "spare",
00:13:32.987 "uuid": "32a8a41f-2787-5b17-81a7-8f30ebe71279",
00:13:32.987 "is_configured": true,
00:13:32.987 "data_offset": 2048,
00:13:32.987 "data_size": 63488
00:13:32.987 },
00:13:32.987 {
00:13:32.987 "name": "BaseBdev2",
00:13:32.988 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f",
00:13:32.988 "is_configured": true,
00:13:32.988 "data_offset": 2048,
00:13:32.988 "data_size": 63488
00:13:32.988 },
00:13:32.988 {
00:13:32.988 "name": "BaseBdev3",
00:13:32.988 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd",
00:13:32.988 "is_configured": true,
00:13:32.988 "data_offset": 2048,
00:13:32.988 "data_size": 63488
00:13:32.988 }
00:13:32.988 ]
00:13:32.988 }'
00:13:32.988 23:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:32.988 23:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:32.988 23:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:32.988 23:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:32.988 23:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:33.929 23:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:33.929 23:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:33.929 23:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:33.929 23:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:33.929 23:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:33.929 23:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:33.929 23:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:33.929 23:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:33.929 23:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:33.929 23:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:33.929 23:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:33.929 23:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:33.929 "name": "raid_bdev1",
00:13:33.929 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9",
00:13:33.929 "strip_size_kb": 64,
00:13:33.929 "state": "online",
00:13:33.929 "raid_level": "raid5f",
00:13:33.929 "superblock": true,
00:13:33.929 "num_base_bdevs": 3,
00:13:33.929 "num_base_bdevs_discovered": 3,
00:13:33.929 "num_base_bdevs_operational": 3,
00:13:33.929 "process": {
00:13:33.929 "type": "rebuild",
00:13:33.929 "target": "spare",
00:13:33.929 "progress": {
00:13:33.929 "blocks": 116736,
00:13:33.929 "percent": 91
00:13:33.929 }
00:13:33.929 },
00:13:33.929 "base_bdevs_list": [
00:13:33.929 {
00:13:33.929 "name": "spare",
00:13:33.929 "uuid": "32a8a41f-2787-5b17-81a7-8f30ebe71279",
00:13:33.929 "is_configured": true,
00:13:33.929 "data_offset": 2048,
00:13:33.929 "data_size": 63488
00:13:33.929 },
00:13:33.929 {
00:13:33.929 "name": "BaseBdev2",
00:13:33.929 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f",
00:13:33.929 "is_configured": true,
00:13:33.929 "data_offset": 2048,
00:13:33.929 "data_size": 63488
00:13:33.929 },
00:13:33.929 {
00:13:33.929 "name": "BaseBdev3",
00:13:33.929 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd",
00:13:33.929 "is_configured": true,
00:13:33.929 "data_offset": 2048,
00:13:33.929 "data_size": 63488
00:13:33.929 }
00:13:33.929 ]
00:13:33.929 }'
00:13:33.929 23:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:34.189 23:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:34.189 23:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:34.189 23:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:34.189 23:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:34.449 [2024-11-02 23:53:28.430837] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:13:34.449 [2024-11-02 23:53:28.430938] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:13:34.449 [2024-11-02 23:53:28.431054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:35.024 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:35.024 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:35.024 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:35.024 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:35.024 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:35.024 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:35.024 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:35.024 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:35.024 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.024 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:35.024 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.286 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:35.286 "name": "raid_bdev1",
00:13:35.286 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9",
00:13:35.286 "strip_size_kb": 64,
00:13:35.286 "state": "online",
00:13:35.286 "raid_level": "raid5f",
00:13:35.286 "superblock": true,
00:13:35.286 "num_base_bdevs": 3,
00:13:35.286 "num_base_bdevs_discovered": 3,
00:13:35.286 "num_base_bdevs_operational": 3,
00:13:35.286 "base_bdevs_list": [
00:13:35.286 {
00:13:35.286 "name": "spare",
00:13:35.286 "uuid": "32a8a41f-2787-5b17-81a7-8f30ebe71279",
00:13:35.286 "is_configured": true,
00:13:35.286 "data_offset": 2048,
00:13:35.286 "data_size": 63488
00:13:35.286 },
00:13:35.286 {
00:13:35.286 "name": "BaseBdev2",
00:13:35.286 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f",
00:13:35.286 "is_configured": true,
00:13:35.286 "data_offset": 2048,
00:13:35.286 "data_size": 63488
00:13:35.286 },
00:13:35.286 {
00:13:35.286 "name": "BaseBdev3",
00:13:35.286 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd",
00:13:35.286 "is_configured": true,
00:13:35.286 "data_offset": 2048,
00:13:35.286 "data_size": 63488
00:13:35.286 }
00:13:35.286 ]
00:13:35.287 }'
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:35.287 "name": "raid_bdev1",
00:13:35.287 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9",
00:13:35.287 "strip_size_kb": 64,
00:13:35.287 "state": "online",
00:13:35.287 "raid_level": "raid5f",
00:13:35.287 "superblock": true,
00:13:35.287 "num_base_bdevs": 3,
00:13:35.287 "num_base_bdevs_discovered": 3,
00:13:35.287 "num_base_bdevs_operational": 3,
00:13:35.287 "base_bdevs_list": [
00:13:35.287 {
00:13:35.287 "name": "spare",
00:13:35.287 "uuid": "32a8a41f-2787-5b17-81a7-8f30ebe71279",
00:13:35.287 "is_configured": true,
00:13:35.287 "data_offset": 2048,
00:13:35.287 "data_size": 63488
00:13:35.287 },
00:13:35.287 {
00:13:35.287 "name": "BaseBdev2",
00:13:35.287 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f",
00:13:35.287 "is_configured": true,
00:13:35.287 "data_offset": 2048,
00:13:35.287 "data_size": 63488
00:13:35.287 },
00:13:35.287 {
00:13:35.287 "name": "BaseBdev3",
00:13:35.287 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd",
00:13:35.287 "is_configured": true,
00:13:35.287 "data_offset": 2048,
00:13:35.287 "data_size": 63488
00:13:35.287 }
00:13:35.287 ]
00:13:35.287 }'
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:35.287 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.548 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:35.548 "name": "raid_bdev1",
00:13:35.548 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9",
00:13:35.548 "strip_size_kb": 64,
00:13:35.548 "state": "online",
00:13:35.548 "raid_level": "raid5f",
00:13:35.548 "superblock": true,
00:13:35.548 "num_base_bdevs": 3,
00:13:35.548 "num_base_bdevs_discovered": 3,
00:13:35.548 "num_base_bdevs_operational": 3,
00:13:35.548 "base_bdevs_list": [
00:13:35.548 {
00:13:35.548 "name": "spare",
00:13:35.548 "uuid": "32a8a41f-2787-5b17-81a7-8f30ebe71279",
00:13:35.548 "is_configured": true,
00:13:35.548 "data_offset": 2048,
00:13:35.548 "data_size": 63488
00:13:35.548 },
00:13:35.548 {
00:13:35.548 "name": "BaseBdev2",
00:13:35.548 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f",
00:13:35.548 "is_configured": true,
00:13:35.548 "data_offset": 2048,
00:13:35.548 "data_size": 63488
00:13:35.548 },
00:13:35.548 {
00:13:35.548 "name": "BaseBdev3",
00:13:35.548 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd",
00:13:35.548 "is_configured": true,
00:13:35.548 "data_offset": 2048,
00:13:35.548 "data_size": 63488
00:13:35.548 }
00:13:35.548 ]
00:13:35.548 }'
00:13:35.548 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:35.548 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:35.833 [2024-11-02 23:53:29.806138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:35.833 [2024-11-02 23:53:29.806170] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:35.833 [2024-11-02 23:53:29.806245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:35.833 [2024-11-02 23:53:29.806318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:35.833 [2024-11-02 23:53:29.806327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:13:35.833 23:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:13:36.093 /dev/nbd0
00:13:36.093 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:36.093 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:36.093 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:13:36.093 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i
00:13:36.093 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:13:36.093 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:13:36.093 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:13:36.093 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break
00:13:36.093 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:13:36.093 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:13:36.093 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:36.093 1+0 records in
00:13:36.093 1+0 records out
00:13:36.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328989 s, 12.5 MB/s
00:13:36.093 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:36.093 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096
00:13:36.093 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:36.093 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:13:36.093 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0
00:13:36.093 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:36.093 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:13:36.093 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:36.353 /dev/nbd1 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.353 1+0 records in 00:13:36.353 1+0 records out 00:13:36.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501089 s, 8.2 MB/s 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # 
'[' 4096 '!=' 0 ']' 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.353 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:36.613 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:36.613 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:36.613 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:36.613 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.613 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.613 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:36.613 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:13:36.613 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.613 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.613 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.873 [2024-11-02 23:53:30.858752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:36.873 [2024-11-02 23:53:30.858853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.873 [2024-11-02 23:53:30.858894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:36.873 [2024-11-02 23:53:30.858921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.873 [2024-11-02 23:53:30.861039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.873 [2024-11-02 23:53:30.861105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:36.873 [2024-11-02 23:53:30.861223] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:36.873 [2024-11-02 23:53:30.861286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:36.873 [2024-11-02 23:53:30.861432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:36.873 [2024-11-02 23:53:30.861565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:36.873 spare 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.873 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.873 [2024-11-02 23:53:30.961486] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:13:36.873 [2024-11-02 23:53:30.961544] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:36.873 [2024-11-02 23:53:30.961832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043d50 00:13:36.873 [2024-11-02 23:53:30.962258] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:13:36.873 [2024-11-02 23:53:30.962272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:13:36.873 [2024-11-02 23:53:30.962409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.140 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.140 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:37.140 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.140 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.140 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:37.140 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.140 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.140 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.140 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.140 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.140 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.140 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.140 23:53:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.140 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.140 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.140 23:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.140 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.140 "name": "raid_bdev1", 00:13:37.140 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9", 00:13:37.140 "strip_size_kb": 64, 00:13:37.140 "state": "online", 00:13:37.140 "raid_level": "raid5f", 00:13:37.140 "superblock": true, 00:13:37.140 "num_base_bdevs": 3, 00:13:37.140 "num_base_bdevs_discovered": 3, 00:13:37.140 "num_base_bdevs_operational": 3, 00:13:37.140 "base_bdevs_list": [ 00:13:37.140 { 00:13:37.140 "name": "spare", 00:13:37.140 "uuid": "32a8a41f-2787-5b17-81a7-8f30ebe71279", 00:13:37.140 "is_configured": true, 00:13:37.140 "data_offset": 2048, 00:13:37.140 "data_size": 63488 00:13:37.140 }, 00:13:37.140 { 00:13:37.140 "name": "BaseBdev2", 00:13:37.140 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f", 00:13:37.140 "is_configured": true, 00:13:37.140 "data_offset": 2048, 00:13:37.140 "data_size": 63488 00:13:37.140 }, 00:13:37.140 { 00:13:37.140 "name": "BaseBdev3", 00:13:37.140 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd", 00:13:37.140 "is_configured": true, 00:13:37.140 "data_offset": 2048, 00:13:37.140 "data_size": 63488 00:13:37.140 } 00:13:37.140 ] 00:13:37.140 }' 00:13:37.140 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.140 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.401 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.401 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.401 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.401 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.401 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.401 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.401 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.401 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.401 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.401 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.401 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.401 "name": "raid_bdev1", 00:13:37.401 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9", 00:13:37.401 "strip_size_kb": 64, 00:13:37.401 "state": "online", 00:13:37.401 "raid_level": "raid5f", 00:13:37.401 "superblock": true, 00:13:37.401 "num_base_bdevs": 3, 00:13:37.401 "num_base_bdevs_discovered": 3, 00:13:37.401 "num_base_bdevs_operational": 3, 00:13:37.401 "base_bdevs_list": [ 00:13:37.401 { 00:13:37.401 "name": "spare", 00:13:37.401 "uuid": "32a8a41f-2787-5b17-81a7-8f30ebe71279", 00:13:37.401 "is_configured": true, 00:13:37.401 "data_offset": 2048, 00:13:37.401 "data_size": 63488 00:13:37.401 }, 00:13:37.401 { 00:13:37.401 "name": "BaseBdev2", 00:13:37.401 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f", 00:13:37.401 "is_configured": true, 00:13:37.401 "data_offset": 2048, 00:13:37.401 "data_size": 63488 00:13:37.401 }, 00:13:37.401 { 00:13:37.401 "name": "BaseBdev3", 00:13:37.401 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd", 
00:13:37.401 "is_configured": true, 00:13:37.401 "data_offset": 2048, 00:13:37.401 "data_size": 63488 00:13:37.401 } 00:13:37.401 ] 00:13:37.401 }' 00:13:37.401 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.401 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.401 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.401 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.401 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:37.401 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.401 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.401 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.662 [2024-11-02 23:53:31.518659] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.662 "name": "raid_bdev1", 00:13:37.662 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9", 00:13:37.662 "strip_size_kb": 64, 00:13:37.662 "state": "online", 00:13:37.662 "raid_level": "raid5f", 00:13:37.662 "superblock": true, 00:13:37.662 "num_base_bdevs": 3, 00:13:37.662 "num_base_bdevs_discovered": 2, 00:13:37.662 "num_base_bdevs_operational": 2, 00:13:37.662 "base_bdevs_list": [ 00:13:37.662 { 
00:13:37.662 "name": null, 00:13:37.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.662 "is_configured": false, 00:13:37.662 "data_offset": 0, 00:13:37.662 "data_size": 63488 00:13:37.662 }, 00:13:37.662 { 00:13:37.662 "name": "BaseBdev2", 00:13:37.662 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f", 00:13:37.662 "is_configured": true, 00:13:37.662 "data_offset": 2048, 00:13:37.662 "data_size": 63488 00:13:37.662 }, 00:13:37.662 { 00:13:37.662 "name": "BaseBdev3", 00:13:37.662 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd", 00:13:37.662 "is_configured": true, 00:13:37.662 "data_offset": 2048, 00:13:37.662 "data_size": 63488 00:13:37.662 } 00:13:37.662 ] 00:13:37.662 }' 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.662 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.922 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:37.922 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.922 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.922 [2024-11-02 23:53:31.926000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:37.922 [2024-11-02 23:53:31.926212] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:37.922 [2024-11-02 23:53:31.926278] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:37.922 [2024-11-02 23:53:31.926350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:37.922 [2024-11-02 23:53:31.930649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043e20 00:13:37.922 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.922 23:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:37.922 [2024-11-02 23:53:31.932854] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:38.863 23:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.863 23:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.863 23:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.863 23:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.863 23:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.863 23:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.863 23:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.863 23:53:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.863 23:53:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.122 23:53:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.122 23:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.122 "name": "raid_bdev1", 00:13:39.122 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9", 00:13:39.122 "strip_size_kb": 64, 00:13:39.122 "state": "online", 00:13:39.122 
"raid_level": "raid5f", 00:13:39.122 "superblock": true, 00:13:39.122 "num_base_bdevs": 3, 00:13:39.122 "num_base_bdevs_discovered": 3, 00:13:39.122 "num_base_bdevs_operational": 3, 00:13:39.122 "process": { 00:13:39.122 "type": "rebuild", 00:13:39.122 "target": "spare", 00:13:39.122 "progress": { 00:13:39.122 "blocks": 20480, 00:13:39.122 "percent": 16 00:13:39.122 } 00:13:39.122 }, 00:13:39.122 "base_bdevs_list": [ 00:13:39.122 { 00:13:39.122 "name": "spare", 00:13:39.122 "uuid": "32a8a41f-2787-5b17-81a7-8f30ebe71279", 00:13:39.122 "is_configured": true, 00:13:39.122 "data_offset": 2048, 00:13:39.122 "data_size": 63488 00:13:39.122 }, 00:13:39.122 { 00:13:39.122 "name": "BaseBdev2", 00:13:39.122 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f", 00:13:39.122 "is_configured": true, 00:13:39.122 "data_offset": 2048, 00:13:39.122 "data_size": 63488 00:13:39.122 }, 00:13:39.122 { 00:13:39.122 "name": "BaseBdev3", 00:13:39.122 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd", 00:13:39.122 "is_configured": true, 00:13:39.122 "data_offset": 2048, 00:13:39.122 "data_size": 63488 00:13:39.122 } 00:13:39.122 ] 00:13:39.122 }' 00:13:39.122 23:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.123 [2024-11-02 23:53:33.072848] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.123 [2024-11-02 23:53:33.139643] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:39.123 [2024-11-02 23:53:33.139694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.123 [2024-11-02 23:53:33.139729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.123 [2024-11-02 23:53:33.139736] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.123 "name": "raid_bdev1", 00:13:39.123 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9", 00:13:39.123 "strip_size_kb": 64, 00:13:39.123 "state": "online", 00:13:39.123 "raid_level": "raid5f", 00:13:39.123 "superblock": true, 00:13:39.123 "num_base_bdevs": 3, 00:13:39.123 "num_base_bdevs_discovered": 2, 00:13:39.123 "num_base_bdevs_operational": 2, 00:13:39.123 "base_bdevs_list": [ 00:13:39.123 { 00:13:39.123 "name": null, 00:13:39.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.123 "is_configured": false, 00:13:39.123 "data_offset": 0, 00:13:39.123 "data_size": 63488 00:13:39.123 }, 00:13:39.123 { 00:13:39.123 "name": "BaseBdev2", 00:13:39.123 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f", 00:13:39.123 "is_configured": true, 00:13:39.123 "data_offset": 2048, 00:13:39.123 "data_size": 63488 00:13:39.123 }, 00:13:39.123 { 00:13:39.123 "name": "BaseBdev3", 00:13:39.123 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd", 00:13:39.123 "is_configured": true, 00:13:39.123 "data_offset": 2048, 00:13:39.123 "data_size": 63488 00:13:39.123 } 00:13:39.123 ] 00:13:39.123 }' 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.123 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.693 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:39.693 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.693 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.693 [2024-11-02 23:53:33.576196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:39.693 [2024-11-02 23:53:33.576289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.693 [2024-11-02 23:53:33.576326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:39.693 [2024-11-02 23:53:33.576353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.693 [2024-11-02 23:53:33.576816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.693 [2024-11-02 23:53:33.576873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:39.693 [2024-11-02 23:53:33.576973] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:39.693 [2024-11-02 23:53:33.577012] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:39.693 [2024-11-02 23:53:33.577054] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:39.693 [2024-11-02 23:53:33.577137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:39.693 [2024-11-02 23:53:33.581294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043ef0 00:13:39.693 spare 00:13:39.693 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.693 23:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:39.693 [2024-11-02 23:53:33.583406] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:40.634 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.634 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.634 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.634 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.634 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.634 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.634 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.634 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.634 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.634 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.634 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.634 "name": "raid_bdev1", 00:13:40.634 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9", 00:13:40.634 "strip_size_kb": 64, 00:13:40.634 "state": 
"online", 00:13:40.634 "raid_level": "raid5f", 00:13:40.634 "superblock": true, 00:13:40.634 "num_base_bdevs": 3, 00:13:40.634 "num_base_bdevs_discovered": 3, 00:13:40.634 "num_base_bdevs_operational": 3, 00:13:40.634 "process": { 00:13:40.634 "type": "rebuild", 00:13:40.634 "target": "spare", 00:13:40.634 "progress": { 00:13:40.634 "blocks": 20480, 00:13:40.634 "percent": 16 00:13:40.634 } 00:13:40.634 }, 00:13:40.634 "base_bdevs_list": [ 00:13:40.634 { 00:13:40.634 "name": "spare", 00:13:40.634 "uuid": "32a8a41f-2787-5b17-81a7-8f30ebe71279", 00:13:40.634 "is_configured": true, 00:13:40.634 "data_offset": 2048, 00:13:40.634 "data_size": 63488 00:13:40.634 }, 00:13:40.634 { 00:13:40.634 "name": "BaseBdev2", 00:13:40.634 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f", 00:13:40.634 "is_configured": true, 00:13:40.634 "data_offset": 2048, 00:13:40.634 "data_size": 63488 00:13:40.634 }, 00:13:40.634 { 00:13:40.634 "name": "BaseBdev3", 00:13:40.634 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd", 00:13:40.634 "is_configured": true, 00:13:40.634 "data_offset": 2048, 00:13:40.634 "data_size": 63488 00:13:40.634 } 00:13:40.634 ] 00:13:40.634 }' 00:13:40.634 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.634 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.634 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.634 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.634 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:40.634 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.634 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.894 [2024-11-02 23:53:34.727553] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:40.894 [2024-11-02 23:53:34.789992] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:40.894 [2024-11-02 23:53:34.790066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.894 [2024-11-02 23:53:34.790082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:40.894 [2024-11-02 23:53:34.790093] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:40.894 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.894 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:40.894 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.894 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.894 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.894 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.894 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:40.894 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.894 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.895 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.895 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.895 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.895 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.895 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.895 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.895 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.895 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.895 "name": "raid_bdev1", 00:13:40.895 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9", 00:13:40.895 "strip_size_kb": 64, 00:13:40.895 "state": "online", 00:13:40.895 "raid_level": "raid5f", 00:13:40.895 "superblock": true, 00:13:40.895 "num_base_bdevs": 3, 00:13:40.895 "num_base_bdevs_discovered": 2, 00:13:40.895 "num_base_bdevs_operational": 2, 00:13:40.895 "base_bdevs_list": [ 00:13:40.895 { 00:13:40.895 "name": null, 00:13:40.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.895 "is_configured": false, 00:13:40.895 "data_offset": 0, 00:13:40.895 "data_size": 63488 00:13:40.895 }, 00:13:40.895 { 00:13:40.895 "name": "BaseBdev2", 00:13:40.895 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f", 00:13:40.895 "is_configured": true, 00:13:40.895 "data_offset": 2048, 00:13:40.895 "data_size": 63488 00:13:40.895 }, 00:13:40.895 { 00:13:40.895 "name": "BaseBdev3", 00:13:40.895 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd", 00:13:40.895 "is_configured": true, 00:13:40.895 "data_offset": 2048, 00:13:40.895 "data_size": 63488 00:13:40.895 } 00:13:40.895 ] 00:13:40.895 }' 00:13:40.895 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.895 23:53:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.466 "name": "raid_bdev1", 00:13:41.466 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9", 00:13:41.466 "strip_size_kb": 64, 00:13:41.466 "state": "online", 00:13:41.466 "raid_level": "raid5f", 00:13:41.466 "superblock": true, 00:13:41.466 "num_base_bdevs": 3, 00:13:41.466 "num_base_bdevs_discovered": 2, 00:13:41.466 "num_base_bdevs_operational": 2, 00:13:41.466 "base_bdevs_list": [ 00:13:41.466 { 00:13:41.466 "name": null, 00:13:41.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.466 "is_configured": false, 00:13:41.466 "data_offset": 0, 00:13:41.466 "data_size": 63488 00:13:41.466 }, 00:13:41.466 { 00:13:41.466 "name": "BaseBdev2", 00:13:41.466 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f", 00:13:41.466 "is_configured": true, 00:13:41.466 "data_offset": 2048, 00:13:41.466 "data_size": 63488 00:13:41.466 }, 00:13:41.466 { 00:13:41.466 "name": "BaseBdev3", 00:13:41.466 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd", 00:13:41.466 "is_configured": true, 
00:13:41.466 "data_offset": 2048, 00:13:41.466 "data_size": 63488 00:13:41.466 } 00:13:41.466 ] 00:13:41.466 }' 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.466 [2024-11-02 23:53:35.438381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:41.466 [2024-11-02 23:53:35.438486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.466 [2024-11-02 23:53:35.438533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:41.466 [2024-11-02 23:53:35.438549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.466 [2024-11-02 23:53:35.438932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.466 [2024-11-02 
23:53:35.438953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:41.466 [2024-11-02 23:53:35.439019] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:41.466 [2024-11-02 23:53:35.439041] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:41.466 [2024-11-02 23:53:35.439049] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:41.466 [2024-11-02 23:53:35.439070] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:41.466 BaseBdev1 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.466 23:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:42.407 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:42.407 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.407 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.407 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:42.407 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.407 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.407 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.407 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.407 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.407 23:53:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.408 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.408 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.408 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.408 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.408 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.666 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.666 "name": "raid_bdev1", 00:13:42.666 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9", 00:13:42.666 "strip_size_kb": 64, 00:13:42.666 "state": "online", 00:13:42.666 "raid_level": "raid5f", 00:13:42.666 "superblock": true, 00:13:42.666 "num_base_bdevs": 3, 00:13:42.666 "num_base_bdevs_discovered": 2, 00:13:42.666 "num_base_bdevs_operational": 2, 00:13:42.666 "base_bdevs_list": [ 00:13:42.666 { 00:13:42.666 "name": null, 00:13:42.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.666 "is_configured": false, 00:13:42.666 "data_offset": 0, 00:13:42.666 "data_size": 63488 00:13:42.666 }, 00:13:42.666 { 00:13:42.666 "name": "BaseBdev2", 00:13:42.666 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f", 00:13:42.666 "is_configured": true, 00:13:42.666 "data_offset": 2048, 00:13:42.666 "data_size": 63488 00:13:42.666 }, 00:13:42.666 { 00:13:42.666 "name": "BaseBdev3", 00:13:42.666 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd", 00:13:42.666 "is_configured": true, 00:13:42.666 "data_offset": 2048, 00:13:42.666 "data_size": 63488 00:13:42.666 } 00:13:42.666 ] 00:13:42.666 }' 00:13:42.666 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.666 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.926 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.926 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.926 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.926 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.926 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.926 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.926 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.926 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.926 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.926 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.926 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.926 "name": "raid_bdev1", 00:13:42.926 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9", 00:13:42.926 "strip_size_kb": 64, 00:13:42.926 "state": "online", 00:13:42.926 "raid_level": "raid5f", 00:13:42.926 "superblock": true, 00:13:42.926 "num_base_bdevs": 3, 00:13:42.926 "num_base_bdevs_discovered": 2, 00:13:42.926 "num_base_bdevs_operational": 2, 00:13:42.926 "base_bdevs_list": [ 00:13:42.926 { 00:13:42.926 "name": null, 00:13:42.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.926 "is_configured": false, 00:13:42.926 "data_offset": 0, 00:13:42.926 "data_size": 63488 00:13:42.926 }, 00:13:42.926 { 00:13:42.926 "name": "BaseBdev2", 00:13:42.926 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f", 
00:13:42.926 "is_configured": true, 00:13:42.926 "data_offset": 2048, 00:13:42.926 "data_size": 63488 00:13:42.926 }, 00:13:42.926 { 00:13:42.926 "name": "BaseBdev3", 00:13:42.926 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd", 00:13:42.926 "is_configured": true, 00:13:42.926 "data_offset": 2048, 00:13:42.926 "data_size": 63488 00:13:42.926 } 00:13:42.926 ] 00:13:42.926 }' 00:13:42.926 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.926 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.926 23:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.187 23:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.187 23:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:43.187 23:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:13:43.187 23:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:43.187 23:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:43.187 23:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.187 23:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:43.187 23:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.187 23:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:43.187 23:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.187 23:53:37 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.187 [2024-11-02 23:53:37.047715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:43.187 [2024-11-02 23:53:37.047933] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:43.187 [2024-11-02 23:53:37.047990] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:43.187 request: 00:13:43.187 { 00:13:43.187 "base_bdev": "BaseBdev1", 00:13:43.187 "raid_bdev": "raid_bdev1", 00:13:43.187 "method": "bdev_raid_add_base_bdev", 00:13:43.187 "req_id": 1 00:13:43.187 } 00:13:43.187 Got JSON-RPC error response 00:13:43.187 response: 00:13:43.187 { 00:13:43.187 "code": -22, 00:13:43.187 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:43.187 } 00:13:43.187 23:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:43.187 23:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:13:43.187 23:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:43.187 23:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:43.187 23:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:43.187 23:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:44.127 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:44.127 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.127 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.127 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.127 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.127 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:44.127 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.127 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.127 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.127 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.127 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.127 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.127 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.127 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.127 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.127 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.127 "name": "raid_bdev1", 00:13:44.127 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9", 00:13:44.127 "strip_size_kb": 64, 00:13:44.127 "state": "online", 00:13:44.127 "raid_level": "raid5f", 00:13:44.127 "superblock": true, 00:13:44.127 "num_base_bdevs": 3, 00:13:44.127 "num_base_bdevs_discovered": 2, 00:13:44.127 "num_base_bdevs_operational": 2, 00:13:44.127 "base_bdevs_list": [ 00:13:44.127 { 00:13:44.127 "name": null, 00:13:44.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.127 "is_configured": false, 00:13:44.127 "data_offset": 0, 00:13:44.127 "data_size": 63488 00:13:44.127 }, 00:13:44.127 { 00:13:44.127 
"name": "BaseBdev2", 00:13:44.127 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f", 00:13:44.127 "is_configured": true, 00:13:44.127 "data_offset": 2048, 00:13:44.127 "data_size": 63488 00:13:44.127 }, 00:13:44.127 { 00:13:44.127 "name": "BaseBdev3", 00:13:44.127 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd", 00:13:44.127 "is_configured": true, 00:13:44.127 "data_offset": 2048, 00:13:44.127 "data_size": 63488 00:13:44.127 } 00:13:44.127 ] 00:13:44.127 }' 00:13:44.127 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.127 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.696 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:44.696 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.697 "name": "raid_bdev1", 00:13:44.697 "uuid": "d4bc0053-0eea-4de4-9a06-831a76444ff9", 00:13:44.697 
"strip_size_kb": 64, 00:13:44.697 "state": "online", 00:13:44.697 "raid_level": "raid5f", 00:13:44.697 "superblock": true, 00:13:44.697 "num_base_bdevs": 3, 00:13:44.697 "num_base_bdevs_discovered": 2, 00:13:44.697 "num_base_bdevs_operational": 2, 00:13:44.697 "base_bdevs_list": [ 00:13:44.697 { 00:13:44.697 "name": null, 00:13:44.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.697 "is_configured": false, 00:13:44.697 "data_offset": 0, 00:13:44.697 "data_size": 63488 00:13:44.697 }, 00:13:44.697 { 00:13:44.697 "name": "BaseBdev2", 00:13:44.697 "uuid": "eac35d84-0f93-5632-9429-e61a702de10f", 00:13:44.697 "is_configured": true, 00:13:44.697 "data_offset": 2048, 00:13:44.697 "data_size": 63488 00:13:44.697 }, 00:13:44.697 { 00:13:44.697 "name": "BaseBdev3", 00:13:44.697 "uuid": "e211fb3f-714b-52b4-9c44-ff5b6e8fc0fd", 00:13:44.697 "is_configured": true, 00:13:44.697 "data_offset": 2048, 00:13:44.697 "data_size": 63488 00:13:44.697 } 00:13:44.697 ] 00:13:44.697 }' 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92310 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 92310 ']' 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 92310 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:44.697 23:53:38 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 92310 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:44.697 killing process with pid 92310 00:13:44.697 Received shutdown signal, test time was about 60.000000 seconds 00:13:44.697 00:13:44.697 Latency(us) 00:13:44.697 [2024-11-02T23:53:38.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.697 [2024-11-02T23:53:38.792Z] =================================================================================================================== 00:13:44.697 [2024-11-02T23:53:38.792Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 92310' 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 92310 00:13:44.697 [2024-11-02 23:53:38.664664] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.697 [2024-11-02 23:53:38.664781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.697 [2024-11-02 23:53:38.664847] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.697 [2024-11-02 23:53:38.664856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:13:44.697 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 92310 00:13:44.697 [2024-11-02 23:53:38.704358] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:44.956 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:44.956 00:13:44.956 real 0m21.280s 00:13:44.956 user 0m27.522s 
00:13:44.956 sys 0m2.777s 00:13:44.956 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:44.956 23:53:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.956 ************************************ 00:13:44.956 END TEST raid5f_rebuild_test_sb 00:13:44.956 ************************************ 00:13:44.956 23:53:38 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:44.956 23:53:38 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:13:44.956 23:53:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:44.956 23:53:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:44.956 23:53:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:44.956 ************************************ 00:13:44.956 START TEST raid5f_state_function_test 00:13:44.956 ************************************ 00:13:44.956 23:53:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:13:44.956 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:44.956 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:44.956 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:44.956 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:44.956 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:44.956 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:44.956 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=93043 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:44.957 Process raid pid: 93043 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93043' 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 93043 00:13:44.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 93043 ']' 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:44.957 23:53:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.216 [2024-11-02 23:53:39.069388] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:13:45.216 [2024-11-02 23:53:39.069598] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.216 [2024-11-02 23:53:39.222015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.216 [2024-11-02 23:53:39.246618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.216 [2024-11-02 23:53:39.287475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.216 [2024-11-02 23:53:39.287515] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.172 [2024-11-02 23:53:39.900058] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:46.172 [2024-11-02 23:53:39.900194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:46.172 [2024-11-02 23:53:39.900218] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:46.172 [2024-11-02 23:53:39.900230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:46.172 [2024-11-02 23:53:39.900236] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:46.172 [2024-11-02 23:53:39.900247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:46.172 [2024-11-02 23:53:39.900253] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:46.172 [2024-11-02 23:53:39.900262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.172 "name": "Existed_Raid", 00:13:46.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.172 "strip_size_kb": 64, 00:13:46.172 "state": "configuring", 00:13:46.172 "raid_level": "raid5f", 00:13:46.172 "superblock": false, 00:13:46.172 "num_base_bdevs": 4, 00:13:46.172 "num_base_bdevs_discovered": 0, 00:13:46.172 "num_base_bdevs_operational": 4, 00:13:46.172 "base_bdevs_list": [ 00:13:46.172 { 00:13:46.172 "name": "BaseBdev1", 00:13:46.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.172 "is_configured": false, 00:13:46.172 "data_offset": 0, 00:13:46.172 "data_size": 0 00:13:46.172 }, 00:13:46.172 { 00:13:46.172 "name": "BaseBdev2", 00:13:46.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.172 "is_configured": false, 00:13:46.172 "data_offset": 0, 00:13:46.172 "data_size": 0 00:13:46.172 }, 00:13:46.172 { 00:13:46.172 "name": "BaseBdev3", 00:13:46.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.172 "is_configured": false, 00:13:46.172 "data_offset": 0, 00:13:46.172 "data_size": 0 00:13:46.172 }, 00:13:46.172 { 00:13:46.172 "name": "BaseBdev4", 00:13:46.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.172 "is_configured": false, 00:13:46.172 "data_offset": 0, 00:13:46.172 "data_size": 0 00:13:46.172 } 00:13:46.172 ] 00:13:46.172 }' 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.172 23:53:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.433 [2024-11-02 23:53:40.371145] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:46.433 [2024-11-02 23:53:40.371180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.433 [2024-11-02 23:53:40.383157] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:46.433 [2024-11-02 23:53:40.383233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:46.433 [2024-11-02 23:53:40.383245] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:46.433 [2024-11-02 23:53:40.383254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:46.433 [2024-11-02 23:53:40.383260] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:46.433 [2024-11-02 23:53:40.383268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:46.433 [2024-11-02 23:53:40.383274] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:13:46.433 [2024-11-02 23:53:40.383282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.433 [2024-11-02 23:53:40.404119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.433 BaseBdev1 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.433 
23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.433 [ 00:13:46.433 { 00:13:46.433 "name": "BaseBdev1", 00:13:46.433 "aliases": [ 00:13:46.433 "6e91bc7c-02f8-4e75-8bf8-a8fc5fb9aaeb" 00:13:46.433 ], 00:13:46.433 "product_name": "Malloc disk", 00:13:46.433 "block_size": 512, 00:13:46.433 "num_blocks": 65536, 00:13:46.433 "uuid": "6e91bc7c-02f8-4e75-8bf8-a8fc5fb9aaeb", 00:13:46.433 "assigned_rate_limits": { 00:13:46.433 "rw_ios_per_sec": 0, 00:13:46.433 "rw_mbytes_per_sec": 0, 00:13:46.433 "r_mbytes_per_sec": 0, 00:13:46.433 "w_mbytes_per_sec": 0 00:13:46.433 }, 00:13:46.433 "claimed": true, 00:13:46.433 "claim_type": "exclusive_write", 00:13:46.433 "zoned": false, 00:13:46.433 "supported_io_types": { 00:13:46.433 "read": true, 00:13:46.433 "write": true, 00:13:46.433 "unmap": true, 00:13:46.433 "flush": true, 00:13:46.433 "reset": true, 00:13:46.433 "nvme_admin": false, 00:13:46.433 "nvme_io": false, 00:13:46.433 "nvme_io_md": false, 00:13:46.433 "write_zeroes": true, 00:13:46.433 "zcopy": true, 00:13:46.433 "get_zone_info": false, 00:13:46.433 "zone_management": false, 00:13:46.433 "zone_append": false, 00:13:46.433 "compare": false, 00:13:46.433 "compare_and_write": false, 00:13:46.433 "abort": true, 00:13:46.433 "seek_hole": false, 00:13:46.433 "seek_data": false, 00:13:46.433 "copy": true, 00:13:46.433 "nvme_iov_md": false 00:13:46.433 }, 00:13:46.433 "memory_domains": [ 00:13:46.433 { 00:13:46.433 "dma_device_id": "system", 00:13:46.433 "dma_device_type": 1 00:13:46.433 }, 00:13:46.433 { 00:13:46.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.433 "dma_device_type": 2 00:13:46.433 } 00:13:46.433 ], 00:13:46.433 "driver_specific": {} 00:13:46.433 } 
00:13:46.433 ] 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.433 "name": "Existed_Raid", 00:13:46.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.433 "strip_size_kb": 64, 00:13:46.433 "state": "configuring", 00:13:46.433 "raid_level": "raid5f", 00:13:46.433 "superblock": false, 00:13:46.433 "num_base_bdevs": 4, 00:13:46.433 "num_base_bdevs_discovered": 1, 00:13:46.433 "num_base_bdevs_operational": 4, 00:13:46.433 "base_bdevs_list": [ 00:13:46.433 { 00:13:46.433 "name": "BaseBdev1", 00:13:46.433 "uuid": "6e91bc7c-02f8-4e75-8bf8-a8fc5fb9aaeb", 00:13:46.433 "is_configured": true, 00:13:46.433 "data_offset": 0, 00:13:46.433 "data_size": 65536 00:13:46.433 }, 00:13:46.433 { 00:13:46.433 "name": "BaseBdev2", 00:13:46.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.433 "is_configured": false, 00:13:46.433 "data_offset": 0, 00:13:46.433 "data_size": 0 00:13:46.433 }, 00:13:46.433 { 00:13:46.433 "name": "BaseBdev3", 00:13:46.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.433 "is_configured": false, 00:13:46.433 "data_offset": 0, 00:13:46.433 "data_size": 0 00:13:46.433 }, 00:13:46.433 { 00:13:46.433 "name": "BaseBdev4", 00:13:46.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.433 "is_configured": false, 00:13:46.433 "data_offset": 0, 00:13:46.433 "data_size": 0 00:13:46.433 } 00:13:46.433 ] 00:13:46.433 }' 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.433 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.004 
[2024-11-02 23:53:40.867397] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:47.004 [2024-11-02 23:53:40.867489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.004 [2024-11-02 23:53:40.879413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.004 [2024-11-02 23:53:40.881229] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:47.004 [2024-11-02 23:53:40.881298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:47.004 [2024-11-02 23:53:40.881342] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:47.004 [2024-11-02 23:53:40.881363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:47.004 [2024-11-02 23:53:40.881381] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:47.004 [2024-11-02 23:53:40.881401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.004 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.004 "name": "Existed_Raid", 00:13:47.004 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:47.004 "strip_size_kb": 64, 00:13:47.004 "state": "configuring", 00:13:47.004 "raid_level": "raid5f", 00:13:47.004 "superblock": false, 00:13:47.004 "num_base_bdevs": 4, 00:13:47.004 "num_base_bdevs_discovered": 1, 00:13:47.004 "num_base_bdevs_operational": 4, 00:13:47.004 "base_bdevs_list": [ 00:13:47.004 { 00:13:47.004 "name": "BaseBdev1", 00:13:47.004 "uuid": "6e91bc7c-02f8-4e75-8bf8-a8fc5fb9aaeb", 00:13:47.004 "is_configured": true, 00:13:47.004 "data_offset": 0, 00:13:47.004 "data_size": 65536 00:13:47.004 }, 00:13:47.004 { 00:13:47.004 "name": "BaseBdev2", 00:13:47.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.004 "is_configured": false, 00:13:47.004 "data_offset": 0, 00:13:47.004 "data_size": 0 00:13:47.004 }, 00:13:47.004 { 00:13:47.004 "name": "BaseBdev3", 00:13:47.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.004 "is_configured": false, 00:13:47.004 "data_offset": 0, 00:13:47.004 "data_size": 0 00:13:47.004 }, 00:13:47.004 { 00:13:47.004 "name": "BaseBdev4", 00:13:47.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.004 "is_configured": false, 00:13:47.004 "data_offset": 0, 00:13:47.005 "data_size": 0 00:13:47.005 } 00:13:47.005 ] 00:13:47.005 }' 00:13:47.005 23:53:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.005 23:53:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.271 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:47.271 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.271 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.271 [2024-11-02 23:53:41.357454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.271 BaseBdev2 00:13:47.271 23:53:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.271 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:47.271 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:47.271 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:47.271 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:47.271 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:47.271 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:47.271 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:47.271 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.271 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.536 [ 00:13:47.536 { 00:13:47.536 "name": "BaseBdev2", 00:13:47.536 "aliases": [ 00:13:47.536 "4ad6fb35-1fdb-450d-ad93-b3a51610aa07" 00:13:47.536 ], 00:13:47.536 "product_name": "Malloc disk", 00:13:47.536 "block_size": 512, 00:13:47.536 "num_blocks": 65536, 00:13:47.536 "uuid": "4ad6fb35-1fdb-450d-ad93-b3a51610aa07", 00:13:47.536 "assigned_rate_limits": { 00:13:47.536 "rw_ios_per_sec": 0, 00:13:47.536 "rw_mbytes_per_sec": 0, 00:13:47.536 
"r_mbytes_per_sec": 0, 00:13:47.536 "w_mbytes_per_sec": 0 00:13:47.536 }, 00:13:47.536 "claimed": true, 00:13:47.536 "claim_type": "exclusive_write", 00:13:47.536 "zoned": false, 00:13:47.536 "supported_io_types": { 00:13:47.536 "read": true, 00:13:47.536 "write": true, 00:13:47.536 "unmap": true, 00:13:47.536 "flush": true, 00:13:47.536 "reset": true, 00:13:47.536 "nvme_admin": false, 00:13:47.536 "nvme_io": false, 00:13:47.536 "nvme_io_md": false, 00:13:47.536 "write_zeroes": true, 00:13:47.536 "zcopy": true, 00:13:47.536 "get_zone_info": false, 00:13:47.536 "zone_management": false, 00:13:47.536 "zone_append": false, 00:13:47.536 "compare": false, 00:13:47.536 "compare_and_write": false, 00:13:47.536 "abort": true, 00:13:47.536 "seek_hole": false, 00:13:47.536 "seek_data": false, 00:13:47.536 "copy": true, 00:13:47.536 "nvme_iov_md": false 00:13:47.536 }, 00:13:47.536 "memory_domains": [ 00:13:47.536 { 00:13:47.536 "dma_device_id": "system", 00:13:47.536 "dma_device_type": 1 00:13:47.536 }, 00:13:47.536 { 00:13:47.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.536 "dma_device_type": 2 00:13:47.536 } 00:13:47.536 ], 00:13:47.536 "driver_specific": {} 00:13:47.536 } 00:13:47.536 ] 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.536 "name": "Existed_Raid", 00:13:47.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.536 "strip_size_kb": 64, 00:13:47.536 "state": "configuring", 00:13:47.536 "raid_level": "raid5f", 00:13:47.536 "superblock": false, 00:13:47.536 "num_base_bdevs": 4, 00:13:47.536 "num_base_bdevs_discovered": 2, 00:13:47.536 "num_base_bdevs_operational": 4, 00:13:47.536 "base_bdevs_list": [ 00:13:47.536 { 00:13:47.536 "name": "BaseBdev1", 00:13:47.536 "uuid": 
"6e91bc7c-02f8-4e75-8bf8-a8fc5fb9aaeb", 00:13:47.536 "is_configured": true, 00:13:47.536 "data_offset": 0, 00:13:47.536 "data_size": 65536 00:13:47.536 }, 00:13:47.536 { 00:13:47.536 "name": "BaseBdev2", 00:13:47.536 "uuid": "4ad6fb35-1fdb-450d-ad93-b3a51610aa07", 00:13:47.536 "is_configured": true, 00:13:47.536 "data_offset": 0, 00:13:47.536 "data_size": 65536 00:13:47.536 }, 00:13:47.536 { 00:13:47.536 "name": "BaseBdev3", 00:13:47.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.536 "is_configured": false, 00:13:47.536 "data_offset": 0, 00:13:47.536 "data_size": 0 00:13:47.536 }, 00:13:47.536 { 00:13:47.536 "name": "BaseBdev4", 00:13:47.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.536 "is_configured": false, 00:13:47.536 "data_offset": 0, 00:13:47.536 "data_size": 0 00:13:47.536 } 00:13:47.536 ] 00:13:47.536 }' 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.536 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.796 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:47.796 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.796 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.056 [2024-11-02 23:53:41.890106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:48.056 BaseBdev3 00:13:48.056 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.056 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:48.056 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:48.056 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- 
# local bdev_timeout= 00:13:48.056 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:48.056 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:48.056 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:48.056 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:48.056 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.056 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.056 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.056 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:48.056 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.056 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.056 [ 00:13:48.056 { 00:13:48.056 "name": "BaseBdev3", 00:13:48.056 "aliases": [ 00:13:48.057 "a9e771ab-d06c-4ae0-b14e-788bfc52444f" 00:13:48.057 ], 00:13:48.057 "product_name": "Malloc disk", 00:13:48.057 "block_size": 512, 00:13:48.057 "num_blocks": 65536, 00:13:48.057 "uuid": "a9e771ab-d06c-4ae0-b14e-788bfc52444f", 00:13:48.057 "assigned_rate_limits": { 00:13:48.057 "rw_ios_per_sec": 0, 00:13:48.057 "rw_mbytes_per_sec": 0, 00:13:48.057 "r_mbytes_per_sec": 0, 00:13:48.057 "w_mbytes_per_sec": 0 00:13:48.057 }, 00:13:48.057 "claimed": true, 00:13:48.057 "claim_type": "exclusive_write", 00:13:48.057 "zoned": false, 00:13:48.057 "supported_io_types": { 00:13:48.057 "read": true, 00:13:48.057 "write": true, 00:13:48.057 "unmap": true, 00:13:48.057 "flush": true, 00:13:48.057 "reset": true, 00:13:48.057 "nvme_admin": false, 
00:13:48.057 "nvme_io": false, 00:13:48.057 "nvme_io_md": false, 00:13:48.057 "write_zeroes": true, 00:13:48.057 "zcopy": true, 00:13:48.057 "get_zone_info": false, 00:13:48.057 "zone_management": false, 00:13:48.057 "zone_append": false, 00:13:48.057 "compare": false, 00:13:48.057 "compare_and_write": false, 00:13:48.057 "abort": true, 00:13:48.057 "seek_hole": false, 00:13:48.057 "seek_data": false, 00:13:48.057 "copy": true, 00:13:48.057 "nvme_iov_md": false 00:13:48.057 }, 00:13:48.057 "memory_domains": [ 00:13:48.057 { 00:13:48.057 "dma_device_id": "system", 00:13:48.057 "dma_device_type": 1 00:13:48.057 }, 00:13:48.057 { 00:13:48.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.057 "dma_device_type": 2 00:13:48.057 } 00:13:48.057 ], 00:13:48.057 "driver_specific": {} 00:13:48.057 } 00:13:48.057 ] 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.057 "name": "Existed_Raid", 00:13:48.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.057 "strip_size_kb": 64, 00:13:48.057 "state": "configuring", 00:13:48.057 "raid_level": "raid5f", 00:13:48.057 "superblock": false, 00:13:48.057 "num_base_bdevs": 4, 00:13:48.057 "num_base_bdevs_discovered": 3, 00:13:48.057 "num_base_bdevs_operational": 4, 00:13:48.057 "base_bdevs_list": [ 00:13:48.057 { 00:13:48.057 "name": "BaseBdev1", 00:13:48.057 "uuid": "6e91bc7c-02f8-4e75-8bf8-a8fc5fb9aaeb", 00:13:48.057 "is_configured": true, 00:13:48.057 "data_offset": 0, 00:13:48.057 "data_size": 65536 00:13:48.057 }, 00:13:48.057 { 00:13:48.057 "name": "BaseBdev2", 00:13:48.057 "uuid": "4ad6fb35-1fdb-450d-ad93-b3a51610aa07", 00:13:48.057 "is_configured": true, 00:13:48.057 "data_offset": 0, 00:13:48.057 "data_size": 65536 00:13:48.057 }, 00:13:48.057 { 
00:13:48.057 "name": "BaseBdev3", 00:13:48.057 "uuid": "a9e771ab-d06c-4ae0-b14e-788bfc52444f", 00:13:48.057 "is_configured": true, 00:13:48.057 "data_offset": 0, 00:13:48.057 "data_size": 65536 00:13:48.057 }, 00:13:48.057 { 00:13:48.057 "name": "BaseBdev4", 00:13:48.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.057 "is_configured": false, 00:13:48.057 "data_offset": 0, 00:13:48.057 "data_size": 0 00:13:48.057 } 00:13:48.057 ] 00:13:48.057 }' 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.057 23:53:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.317 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:48.317 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.317 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.318 [2024-11-02 23:53:42.372109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:48.318 [2024-11-02 23:53:42.372233] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:48.318 [2024-11-02 23:53:42.372246] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:48.318 [2024-11-02 23:53:42.372537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:48.318 [2024-11-02 23:53:42.373010] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:48.318 [2024-11-02 23:53:42.373035] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:48.318 [2024-11-02 23:53:42.373238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.318 BaseBdev4 00:13:48.318 23:53:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.318 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:48.318 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:48.318 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:48.318 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:48.318 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:48.318 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:48.318 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:48.318 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.318 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.318 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.318 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:48.318 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.318 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.318 [ 00:13:48.318 { 00:13:48.318 "name": "BaseBdev4", 00:13:48.318 "aliases": [ 00:13:48.318 "4d5ab688-86fa-4037-99c3-bde7ef91123e" 00:13:48.318 ], 00:13:48.318 "product_name": "Malloc disk", 00:13:48.318 "block_size": 512, 00:13:48.318 "num_blocks": 65536, 00:13:48.318 "uuid": "4d5ab688-86fa-4037-99c3-bde7ef91123e", 00:13:48.318 "assigned_rate_limits": { 00:13:48.318 "rw_ios_per_sec": 0, 00:13:48.318 
"rw_mbytes_per_sec": 0, 00:13:48.318 "r_mbytes_per_sec": 0, 00:13:48.318 "w_mbytes_per_sec": 0 00:13:48.318 }, 00:13:48.318 "claimed": true, 00:13:48.318 "claim_type": "exclusive_write", 00:13:48.318 "zoned": false, 00:13:48.318 "supported_io_types": { 00:13:48.318 "read": true, 00:13:48.318 "write": true, 00:13:48.318 "unmap": true, 00:13:48.318 "flush": true, 00:13:48.318 "reset": true, 00:13:48.318 "nvme_admin": false, 00:13:48.318 "nvme_io": false, 00:13:48.318 "nvme_io_md": false, 00:13:48.318 "write_zeroes": true, 00:13:48.318 "zcopy": true, 00:13:48.318 "get_zone_info": false, 00:13:48.318 "zone_management": false, 00:13:48.318 "zone_append": false, 00:13:48.318 "compare": false, 00:13:48.318 "compare_and_write": false, 00:13:48.318 "abort": true, 00:13:48.318 "seek_hole": false, 00:13:48.318 "seek_data": false, 00:13:48.318 "copy": true, 00:13:48.318 "nvme_iov_md": false 00:13:48.318 }, 00:13:48.318 "memory_domains": [ 00:13:48.318 { 00:13:48.318 "dma_device_id": "system", 00:13:48.318 "dma_device_type": 1 00:13:48.318 }, 00:13:48.318 { 00:13:48.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.318 "dma_device_type": 2 00:13:48.318 } 00:13:48.318 ], 00:13:48.318 "driver_specific": {} 00:13:48.318 } 00:13:48.318 ] 00:13:48.318 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.578 23:53:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.578 "name": "Existed_Raid", 00:13:48.578 "uuid": "218d2431-42ae-4c45-921a-6a6d15309f95", 00:13:48.578 "strip_size_kb": 64, 00:13:48.578 "state": "online", 00:13:48.578 "raid_level": "raid5f", 00:13:48.578 "superblock": false, 00:13:48.578 "num_base_bdevs": 4, 00:13:48.578 "num_base_bdevs_discovered": 4, 00:13:48.578 "num_base_bdevs_operational": 4, 00:13:48.578 "base_bdevs_list": [ 00:13:48.578 { 00:13:48.578 "name": 
"BaseBdev1", 00:13:48.578 "uuid": "6e91bc7c-02f8-4e75-8bf8-a8fc5fb9aaeb", 00:13:48.578 "is_configured": true, 00:13:48.578 "data_offset": 0, 00:13:48.578 "data_size": 65536 00:13:48.578 }, 00:13:48.578 { 00:13:48.578 "name": "BaseBdev2", 00:13:48.578 "uuid": "4ad6fb35-1fdb-450d-ad93-b3a51610aa07", 00:13:48.578 "is_configured": true, 00:13:48.578 "data_offset": 0, 00:13:48.578 "data_size": 65536 00:13:48.578 }, 00:13:48.578 { 00:13:48.578 "name": "BaseBdev3", 00:13:48.578 "uuid": "a9e771ab-d06c-4ae0-b14e-788bfc52444f", 00:13:48.578 "is_configured": true, 00:13:48.578 "data_offset": 0, 00:13:48.578 "data_size": 65536 00:13:48.578 }, 00:13:48.578 { 00:13:48.578 "name": "BaseBdev4", 00:13:48.578 "uuid": "4d5ab688-86fa-4037-99c3-bde7ef91123e", 00:13:48.578 "is_configured": true, 00:13:48.578 "data_offset": 0, 00:13:48.578 "data_size": 65536 00:13:48.578 } 00:13:48.578 ] 00:13:48.578 }' 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.578 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.838 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:48.838 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:48.838 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:48.838 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:48.838 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:48.838 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:48.838 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:48.838 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:13:48.838 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.838 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.838 [2024-11-02 23:53:42.847524] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:48.838 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.839 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:48.839 "name": "Existed_Raid", 00:13:48.839 "aliases": [ 00:13:48.839 "218d2431-42ae-4c45-921a-6a6d15309f95" 00:13:48.839 ], 00:13:48.839 "product_name": "Raid Volume", 00:13:48.839 "block_size": 512, 00:13:48.839 "num_blocks": 196608, 00:13:48.839 "uuid": "218d2431-42ae-4c45-921a-6a6d15309f95", 00:13:48.839 "assigned_rate_limits": { 00:13:48.839 "rw_ios_per_sec": 0, 00:13:48.839 "rw_mbytes_per_sec": 0, 00:13:48.839 "r_mbytes_per_sec": 0, 00:13:48.839 "w_mbytes_per_sec": 0 00:13:48.839 }, 00:13:48.839 "claimed": false, 00:13:48.839 "zoned": false, 00:13:48.839 "supported_io_types": { 00:13:48.839 "read": true, 00:13:48.839 "write": true, 00:13:48.839 "unmap": false, 00:13:48.839 "flush": false, 00:13:48.839 "reset": true, 00:13:48.839 "nvme_admin": false, 00:13:48.839 "nvme_io": false, 00:13:48.839 "nvme_io_md": false, 00:13:48.839 "write_zeroes": true, 00:13:48.839 "zcopy": false, 00:13:48.839 "get_zone_info": false, 00:13:48.839 "zone_management": false, 00:13:48.839 "zone_append": false, 00:13:48.839 "compare": false, 00:13:48.839 "compare_and_write": false, 00:13:48.839 "abort": false, 00:13:48.839 "seek_hole": false, 00:13:48.839 "seek_data": false, 00:13:48.839 "copy": false, 00:13:48.839 "nvme_iov_md": false 00:13:48.839 }, 00:13:48.839 "driver_specific": { 00:13:48.839 "raid": { 00:13:48.839 "uuid": "218d2431-42ae-4c45-921a-6a6d15309f95", 00:13:48.839 "strip_size_kb": 64, 
00:13:48.839 "state": "online", 00:13:48.839 "raid_level": "raid5f", 00:13:48.839 "superblock": false, 00:13:48.839 "num_base_bdevs": 4, 00:13:48.839 "num_base_bdevs_discovered": 4, 00:13:48.839 "num_base_bdevs_operational": 4, 00:13:48.839 "base_bdevs_list": [ 00:13:48.839 { 00:13:48.839 "name": "BaseBdev1", 00:13:48.839 "uuid": "6e91bc7c-02f8-4e75-8bf8-a8fc5fb9aaeb", 00:13:48.839 "is_configured": true, 00:13:48.839 "data_offset": 0, 00:13:48.839 "data_size": 65536 00:13:48.839 }, 00:13:48.839 { 00:13:48.839 "name": "BaseBdev2", 00:13:48.839 "uuid": "4ad6fb35-1fdb-450d-ad93-b3a51610aa07", 00:13:48.839 "is_configured": true, 00:13:48.839 "data_offset": 0, 00:13:48.839 "data_size": 65536 00:13:48.839 }, 00:13:48.839 { 00:13:48.839 "name": "BaseBdev3", 00:13:48.839 "uuid": "a9e771ab-d06c-4ae0-b14e-788bfc52444f", 00:13:48.839 "is_configured": true, 00:13:48.839 "data_offset": 0, 00:13:48.839 "data_size": 65536 00:13:48.839 }, 00:13:48.839 { 00:13:48.839 "name": "BaseBdev4", 00:13:48.839 "uuid": "4d5ab688-86fa-4037-99c3-bde7ef91123e", 00:13:48.839 "is_configured": true, 00:13:48.839 "data_offset": 0, 00:13:48.839 "data_size": 65536 00:13:48.839 } 00:13:48.839 ] 00:13:48.839 } 00:13:48.839 } 00:13:48.839 }' 00:13:48.839 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:48.839 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:48.839 BaseBdev2 00:13:48.839 BaseBdev3 00:13:48.839 BaseBdev4' 00:13:48.839 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.099 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:49.099 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.099 23:53:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:49.099 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.099 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.099 23:53:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.099 23:53:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:13:49.099 [2024-11-02 23:53:43.158859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.099 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.100 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.100 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.100 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.100 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.100 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.100 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.100 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.100 23:53:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.100 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.100 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.100 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.360 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.360 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.360 "name": "Existed_Raid", 00:13:49.360 "uuid": "218d2431-42ae-4c45-921a-6a6d15309f95", 00:13:49.360 "strip_size_kb": 64, 00:13:49.360 "state": "online", 00:13:49.360 "raid_level": "raid5f", 00:13:49.360 "superblock": false, 00:13:49.360 "num_base_bdevs": 4, 00:13:49.360 "num_base_bdevs_discovered": 3, 00:13:49.360 "num_base_bdevs_operational": 3, 00:13:49.360 "base_bdevs_list": [ 00:13:49.360 { 00:13:49.360 "name": null, 00:13:49.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.360 "is_configured": false, 00:13:49.360 "data_offset": 0, 00:13:49.360 "data_size": 65536 00:13:49.360 }, 00:13:49.360 { 00:13:49.360 "name": "BaseBdev2", 00:13:49.360 "uuid": "4ad6fb35-1fdb-450d-ad93-b3a51610aa07", 00:13:49.360 "is_configured": true, 00:13:49.360 "data_offset": 0, 00:13:49.360 "data_size": 65536 00:13:49.360 }, 00:13:49.360 { 00:13:49.360 "name": "BaseBdev3", 00:13:49.360 "uuid": "a9e771ab-d06c-4ae0-b14e-788bfc52444f", 00:13:49.360 "is_configured": true, 00:13:49.360 "data_offset": 0, 00:13:49.360 "data_size": 65536 00:13:49.360 }, 00:13:49.360 { 00:13:49.360 "name": "BaseBdev4", 00:13:49.360 "uuid": "4d5ab688-86fa-4037-99c3-bde7ef91123e", 00:13:49.360 "is_configured": true, 00:13:49.360 "data_offset": 0, 00:13:49.360 "data_size": 65536 00:13:49.360 } 00:13:49.360 ] 00:13:49.360 }' 00:13:49.360 
23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.360 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.620 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:49.620 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:49.620 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.620 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.620 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.620 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:49.620 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.620 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:49.620 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:49.620 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:49.620 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.620 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.620 [2024-11-02 23:53:43.709310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:49.620 [2024-11-02 23:53:43.709405] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.881 [2024-11-02 23:53:43.720719] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.881 [2024-11-02 23:53:43.780624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:49.881 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.882 [2024-11-02 23:53:43.851590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:49.882 [2024-11-02 23:53:43.851688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 
00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.882 BaseBdev2 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.882 [ 00:13:49.882 { 00:13:49.882 "name": "BaseBdev2", 00:13:49.882 "aliases": [ 00:13:49.882 "8c0aa643-3a0d-4f32-9318-bf3a17da18fe" 00:13:49.882 ], 00:13:49.882 "product_name": "Malloc disk", 00:13:49.882 "block_size": 512, 00:13:49.882 "num_blocks": 65536, 00:13:49.882 "uuid": "8c0aa643-3a0d-4f32-9318-bf3a17da18fe", 00:13:49.882 "assigned_rate_limits": { 00:13:49.882 "rw_ios_per_sec": 0, 00:13:49.882 "rw_mbytes_per_sec": 0, 00:13:49.882 "r_mbytes_per_sec": 0, 00:13:49.882 "w_mbytes_per_sec": 0 00:13:49.882 }, 00:13:49.882 "claimed": false, 00:13:49.882 "zoned": false, 00:13:49.882 "supported_io_types": { 00:13:49.882 "read": true, 00:13:49.882 "write": true, 00:13:49.882 "unmap": true, 00:13:49.882 "flush": true, 00:13:49.882 "reset": true, 00:13:49.882 "nvme_admin": false, 00:13:49.882 "nvme_io": false, 00:13:49.882 "nvme_io_md": false, 00:13:49.882 "write_zeroes": true, 00:13:49.882 "zcopy": true, 00:13:49.882 "get_zone_info": false, 00:13:49.882 "zone_management": false, 00:13:49.882 "zone_append": false, 00:13:49.882 "compare": false, 00:13:49.882 "compare_and_write": false, 00:13:49.882 "abort": true, 00:13:49.882 "seek_hole": false, 00:13:49.882 "seek_data": false, 00:13:49.882 "copy": true, 00:13:49.882 "nvme_iov_md": false 00:13:49.882 }, 00:13:49.882 "memory_domains": [ 00:13:49.882 { 00:13:49.882 "dma_device_id": "system", 00:13:49.882 
"dma_device_type": 1 00:13:49.882 }, 00:13:49.882 { 00:13:49.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.882 "dma_device_type": 2 00:13:49.882 } 00:13:49.882 ], 00:13:49.882 "driver_specific": {} 00:13:49.882 } 00:13:49.882 ] 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.882 BaseBdev3 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:49.882 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:50.142 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:50.142 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:50.142 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:50.142 23:53:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.142 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.142 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.142 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:50.142 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.142 23:53:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.142 [ 00:13:50.142 { 00:13:50.142 "name": "BaseBdev3", 00:13:50.142 "aliases": [ 00:13:50.142 "5058d3e7-3887-4953-bbeb-bd3877b55367" 00:13:50.142 ], 00:13:50.142 "product_name": "Malloc disk", 00:13:50.142 "block_size": 512, 00:13:50.142 "num_blocks": 65536, 00:13:50.142 "uuid": "5058d3e7-3887-4953-bbeb-bd3877b55367", 00:13:50.142 "assigned_rate_limits": { 00:13:50.142 "rw_ios_per_sec": 0, 00:13:50.142 "rw_mbytes_per_sec": 0, 00:13:50.142 "r_mbytes_per_sec": 0, 00:13:50.142 "w_mbytes_per_sec": 0 00:13:50.142 }, 00:13:50.142 "claimed": false, 00:13:50.142 "zoned": false, 00:13:50.142 "supported_io_types": { 00:13:50.142 "read": true, 00:13:50.142 "write": true, 00:13:50.142 "unmap": true, 00:13:50.142 "flush": true, 00:13:50.143 "reset": true, 00:13:50.143 "nvme_admin": false, 00:13:50.143 "nvme_io": false, 00:13:50.143 "nvme_io_md": false, 00:13:50.143 "write_zeroes": true, 00:13:50.143 "zcopy": true, 00:13:50.143 "get_zone_info": false, 00:13:50.143 "zone_management": false, 00:13:50.143 "zone_append": false, 00:13:50.143 "compare": false, 00:13:50.143 "compare_and_write": false, 00:13:50.143 "abort": true, 00:13:50.143 "seek_hole": false, 00:13:50.143 "seek_data": false, 00:13:50.143 "copy": true, 00:13:50.143 "nvme_iov_md": false 00:13:50.143 }, 00:13:50.143 "memory_domains": [ 00:13:50.143 { 00:13:50.143 
"dma_device_id": "system", 00:13:50.143 "dma_device_type": 1 00:13:50.143 }, 00:13:50.143 { 00:13:50.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.143 "dma_device_type": 2 00:13:50.143 } 00:13:50.143 ], 00:13:50.143 "driver_specific": {} 00:13:50.143 } 00:13:50.143 ] 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.143 BaseBdev4 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 
00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.143 [ 00:13:50.143 { 00:13:50.143 "name": "BaseBdev4", 00:13:50.143 "aliases": [ 00:13:50.143 "422141b4-02f1-4427-a160-11cfae8aa0ab" 00:13:50.143 ], 00:13:50.143 "product_name": "Malloc disk", 00:13:50.143 "block_size": 512, 00:13:50.143 "num_blocks": 65536, 00:13:50.143 "uuid": "422141b4-02f1-4427-a160-11cfae8aa0ab", 00:13:50.143 "assigned_rate_limits": { 00:13:50.143 "rw_ios_per_sec": 0, 00:13:50.143 "rw_mbytes_per_sec": 0, 00:13:50.143 "r_mbytes_per_sec": 0, 00:13:50.143 "w_mbytes_per_sec": 0 00:13:50.143 }, 00:13:50.143 "claimed": false, 00:13:50.143 "zoned": false, 00:13:50.143 "supported_io_types": { 00:13:50.143 "read": true, 00:13:50.143 "write": true, 00:13:50.143 "unmap": true, 00:13:50.143 "flush": true, 00:13:50.143 "reset": true, 00:13:50.143 "nvme_admin": false, 00:13:50.143 "nvme_io": false, 00:13:50.143 "nvme_io_md": false, 00:13:50.143 "write_zeroes": true, 00:13:50.143 "zcopy": true, 00:13:50.143 "get_zone_info": false, 00:13:50.143 "zone_management": false, 00:13:50.143 "zone_append": false, 00:13:50.143 "compare": false, 00:13:50.143 "compare_and_write": false, 00:13:50.143 "abort": true, 00:13:50.143 "seek_hole": false, 00:13:50.143 "seek_data": false, 00:13:50.143 "copy": true, 00:13:50.143 "nvme_iov_md": false 00:13:50.143 }, 00:13:50.143 "memory_domains": [ 
00:13:50.143 { 00:13:50.143 "dma_device_id": "system", 00:13:50.143 "dma_device_type": 1 00:13:50.143 }, 00:13:50.143 { 00:13:50.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.143 "dma_device_type": 2 00:13:50.143 } 00:13:50.143 ], 00:13:50.143 "driver_specific": {} 00:13:50.143 } 00:13:50.143 ] 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.143 [2024-11-02 23:53:44.075229] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:50.143 [2024-11-02 23:53:44.075311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:50.143 [2024-11-02 23:53:44.075383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.143 [2024-11-02 23:53:44.077156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:50.143 [2024-11-02 23:53:44.077238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.143 "name": "Existed_Raid", 00:13:50.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.143 "strip_size_kb": 64, 00:13:50.143 "state": "configuring", 00:13:50.143 "raid_level": "raid5f", 00:13:50.143 
"superblock": false, 00:13:50.143 "num_base_bdevs": 4, 00:13:50.143 "num_base_bdevs_discovered": 3, 00:13:50.143 "num_base_bdevs_operational": 4, 00:13:50.143 "base_bdevs_list": [ 00:13:50.143 { 00:13:50.143 "name": "BaseBdev1", 00:13:50.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.143 "is_configured": false, 00:13:50.143 "data_offset": 0, 00:13:50.143 "data_size": 0 00:13:50.143 }, 00:13:50.143 { 00:13:50.143 "name": "BaseBdev2", 00:13:50.143 "uuid": "8c0aa643-3a0d-4f32-9318-bf3a17da18fe", 00:13:50.143 "is_configured": true, 00:13:50.143 "data_offset": 0, 00:13:50.143 "data_size": 65536 00:13:50.143 }, 00:13:50.143 { 00:13:50.143 "name": "BaseBdev3", 00:13:50.143 "uuid": "5058d3e7-3887-4953-bbeb-bd3877b55367", 00:13:50.143 "is_configured": true, 00:13:50.143 "data_offset": 0, 00:13:50.143 "data_size": 65536 00:13:50.143 }, 00:13:50.143 { 00:13:50.143 "name": "BaseBdev4", 00:13:50.143 "uuid": "422141b4-02f1-4427-a160-11cfae8aa0ab", 00:13:50.143 "is_configured": true, 00:13:50.143 "data_offset": 0, 00:13:50.143 "data_size": 65536 00:13:50.143 } 00:13:50.143 ] 00:13:50.143 }' 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.143 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.404 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:50.404 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.404 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.404 [2024-11-02 23:53:44.482600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:50.404 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.404 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:13:50.404 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.404 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.404 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.404 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.404 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.404 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.404 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.404 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.404 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.404 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.404 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.404 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.404 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.664 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.664 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.664 "name": "Existed_Raid", 00:13:50.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.664 "strip_size_kb": 64, 00:13:50.664 "state": "configuring", 00:13:50.664 "raid_level": "raid5f", 00:13:50.664 "superblock": false, 
00:13:50.664 "num_base_bdevs": 4, 00:13:50.664 "num_base_bdevs_discovered": 2, 00:13:50.664 "num_base_bdevs_operational": 4, 00:13:50.664 "base_bdevs_list": [ 00:13:50.664 { 00:13:50.664 "name": "BaseBdev1", 00:13:50.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.664 "is_configured": false, 00:13:50.664 "data_offset": 0, 00:13:50.664 "data_size": 0 00:13:50.664 }, 00:13:50.664 { 00:13:50.664 "name": null, 00:13:50.664 "uuid": "8c0aa643-3a0d-4f32-9318-bf3a17da18fe", 00:13:50.664 "is_configured": false, 00:13:50.664 "data_offset": 0, 00:13:50.664 "data_size": 65536 00:13:50.664 }, 00:13:50.664 { 00:13:50.664 "name": "BaseBdev3", 00:13:50.664 "uuid": "5058d3e7-3887-4953-bbeb-bd3877b55367", 00:13:50.664 "is_configured": true, 00:13:50.664 "data_offset": 0, 00:13:50.664 "data_size": 65536 00:13:50.664 }, 00:13:50.664 { 00:13:50.664 "name": "BaseBdev4", 00:13:50.664 "uuid": "422141b4-02f1-4427-a160-11cfae8aa0ab", 00:13:50.664 "is_configured": true, 00:13:50.664 "data_offset": 0, 00:13:50.664 "data_size": 65536 00:13:50.664 } 00:13:50.664 ] 00:13:50.664 }' 00:13:50.664 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.664 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:50.925 
23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.925 [2024-11-02 23:53:44.964607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.925 BaseBdev1 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:50.925 23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.925 
23:53:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.925 [ 00:13:50.925 { 00:13:50.925 "name": "BaseBdev1", 00:13:50.925 "aliases": [ 00:13:50.925 "8cc620b3-3dfb-457b-8241-1da1dde6864f" 00:13:50.925 ], 00:13:50.925 "product_name": "Malloc disk", 00:13:50.925 "block_size": 512, 00:13:50.925 "num_blocks": 65536, 00:13:50.925 "uuid": "8cc620b3-3dfb-457b-8241-1da1dde6864f", 00:13:50.925 "assigned_rate_limits": { 00:13:50.925 "rw_ios_per_sec": 0, 00:13:50.925 "rw_mbytes_per_sec": 0, 00:13:50.925 "r_mbytes_per_sec": 0, 00:13:50.925 "w_mbytes_per_sec": 0 00:13:50.925 }, 00:13:50.925 "claimed": true, 00:13:50.925 "claim_type": "exclusive_write", 00:13:50.925 "zoned": false, 00:13:50.925 "supported_io_types": { 00:13:50.925 "read": true, 00:13:50.925 "write": true, 00:13:50.925 "unmap": true, 00:13:50.925 "flush": true, 00:13:50.925 "reset": true, 00:13:50.925 "nvme_admin": false, 00:13:50.925 "nvme_io": false, 00:13:50.925 "nvme_io_md": false, 00:13:50.925 "write_zeroes": true, 00:13:50.925 "zcopy": true, 00:13:50.925 "get_zone_info": false, 00:13:50.925 "zone_management": false, 00:13:50.925 "zone_append": false, 00:13:50.925 "compare": false, 00:13:50.925 "compare_and_write": false, 00:13:50.925 "abort": true, 00:13:50.925 "seek_hole": false, 00:13:50.925 "seek_data": false, 00:13:50.925 "copy": true, 00:13:50.925 "nvme_iov_md": false 00:13:50.925 }, 00:13:50.925 "memory_domains": [ 00:13:50.925 { 00:13:50.925 "dma_device_id": "system", 00:13:50.925 "dma_device_type": 1 00:13:50.925 }, 00:13:50.925 { 00:13:50.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.925 "dma_device_type": 2 00:13:50.925 } 00:13:50.925 ], 00:13:50.925 "driver_specific": {} 00:13:50.925 } 00:13:50.925 ] 00:13:50.926 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.926 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:50.926 23:53:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:50.926 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.926 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.926 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.926 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.926 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.926 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.926 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.926 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.926 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.926 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.926 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.926 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.926 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.186 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.186 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.186 "name": "Existed_Raid", 00:13:51.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.186 "strip_size_kb": 64, 00:13:51.186 "state": 
"configuring", 00:13:51.186 "raid_level": "raid5f", 00:13:51.186 "superblock": false, 00:13:51.186 "num_base_bdevs": 4, 00:13:51.186 "num_base_bdevs_discovered": 3, 00:13:51.186 "num_base_bdevs_operational": 4, 00:13:51.186 "base_bdevs_list": [ 00:13:51.186 { 00:13:51.186 "name": "BaseBdev1", 00:13:51.186 "uuid": "8cc620b3-3dfb-457b-8241-1da1dde6864f", 00:13:51.186 "is_configured": true, 00:13:51.186 "data_offset": 0, 00:13:51.186 "data_size": 65536 00:13:51.186 }, 00:13:51.186 { 00:13:51.186 "name": null, 00:13:51.186 "uuid": "8c0aa643-3a0d-4f32-9318-bf3a17da18fe", 00:13:51.186 "is_configured": false, 00:13:51.186 "data_offset": 0, 00:13:51.186 "data_size": 65536 00:13:51.186 }, 00:13:51.186 { 00:13:51.186 "name": "BaseBdev3", 00:13:51.186 "uuid": "5058d3e7-3887-4953-bbeb-bd3877b55367", 00:13:51.186 "is_configured": true, 00:13:51.186 "data_offset": 0, 00:13:51.186 "data_size": 65536 00:13:51.186 }, 00:13:51.186 { 00:13:51.186 "name": "BaseBdev4", 00:13:51.186 "uuid": "422141b4-02f1-4427-a160-11cfae8aa0ab", 00:13:51.186 "is_configured": true, 00:13:51.186 "data_offset": 0, 00:13:51.186 "data_size": 65536 00:13:51.186 } 00:13:51.186 ] 00:13:51.186 }' 00:13:51.186 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.186 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.449 23:53:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.449 [2024-11-02 23:53:45.459804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.449 23:53:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.449 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.449 "name": "Existed_Raid", 00:13:51.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.449 "strip_size_kb": 64, 00:13:51.449 "state": "configuring", 00:13:51.449 "raid_level": "raid5f", 00:13:51.449 "superblock": false, 00:13:51.449 "num_base_bdevs": 4, 00:13:51.449 "num_base_bdevs_discovered": 2, 00:13:51.449 "num_base_bdevs_operational": 4, 00:13:51.449 "base_bdevs_list": [ 00:13:51.449 { 00:13:51.449 "name": "BaseBdev1", 00:13:51.449 "uuid": "8cc620b3-3dfb-457b-8241-1da1dde6864f", 00:13:51.449 "is_configured": true, 00:13:51.449 "data_offset": 0, 00:13:51.449 "data_size": 65536 00:13:51.449 }, 00:13:51.449 { 00:13:51.449 "name": null, 00:13:51.449 "uuid": "8c0aa643-3a0d-4f32-9318-bf3a17da18fe", 00:13:51.449 "is_configured": false, 00:13:51.449 "data_offset": 0, 00:13:51.449 "data_size": 65536 00:13:51.449 }, 00:13:51.449 { 00:13:51.449 "name": null, 00:13:51.449 "uuid": "5058d3e7-3887-4953-bbeb-bd3877b55367", 00:13:51.449 "is_configured": false, 00:13:51.449 "data_offset": 0, 00:13:51.449 "data_size": 65536 00:13:51.449 }, 00:13:51.450 { 00:13:51.450 "name": "BaseBdev4", 00:13:51.450 "uuid": "422141b4-02f1-4427-a160-11cfae8aa0ab", 00:13:51.450 "is_configured": true, 00:13:51.450 "data_offset": 0, 00:13:51.450 "data_size": 65536 00:13:51.450 } 00:13:51.450 ] 00:13:51.450 }' 00:13:51.450 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.450 23:53:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.018 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:52.018 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.018 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.018 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.019 [2024-11-02 23:53:45.875087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.019 
23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.019 "name": "Existed_Raid", 00:13:52.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.019 "strip_size_kb": 64, 00:13:52.019 "state": "configuring", 00:13:52.019 "raid_level": "raid5f", 00:13:52.019 "superblock": false, 00:13:52.019 "num_base_bdevs": 4, 00:13:52.019 "num_base_bdevs_discovered": 3, 00:13:52.019 "num_base_bdevs_operational": 4, 00:13:52.019 "base_bdevs_list": [ 00:13:52.019 { 00:13:52.019 "name": "BaseBdev1", 00:13:52.019 "uuid": "8cc620b3-3dfb-457b-8241-1da1dde6864f", 00:13:52.019 "is_configured": true, 00:13:52.019 "data_offset": 0, 00:13:52.019 "data_size": 65536 00:13:52.019 }, 00:13:52.019 { 00:13:52.019 "name": null, 00:13:52.019 "uuid": "8c0aa643-3a0d-4f32-9318-bf3a17da18fe", 00:13:52.019 "is_configured": 
false, 00:13:52.019 "data_offset": 0, 00:13:52.019 "data_size": 65536 00:13:52.019 }, 00:13:52.019 { 00:13:52.019 "name": "BaseBdev3", 00:13:52.019 "uuid": "5058d3e7-3887-4953-bbeb-bd3877b55367", 00:13:52.019 "is_configured": true, 00:13:52.019 "data_offset": 0, 00:13:52.019 "data_size": 65536 00:13:52.019 }, 00:13:52.019 { 00:13:52.019 "name": "BaseBdev4", 00:13:52.019 "uuid": "422141b4-02f1-4427-a160-11cfae8aa0ab", 00:13:52.019 "is_configured": true, 00:13:52.019 "data_offset": 0, 00:13:52.019 "data_size": 65536 00:13:52.019 } 00:13:52.019 ] 00:13:52.019 }' 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.019 23:53:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.278 [2024-11-02 23:53:46.342342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.278 23:53:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.538 23:53:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.538 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.538 "name": "Existed_Raid", 00:13:52.538 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:52.538 "strip_size_kb": 64, 00:13:52.538 "state": "configuring", 00:13:52.538 "raid_level": "raid5f", 00:13:52.538 "superblock": false, 00:13:52.538 "num_base_bdevs": 4, 00:13:52.538 "num_base_bdevs_discovered": 2, 00:13:52.538 "num_base_bdevs_operational": 4, 00:13:52.538 "base_bdevs_list": [ 00:13:52.538 { 00:13:52.538 "name": null, 00:13:52.538 "uuid": "8cc620b3-3dfb-457b-8241-1da1dde6864f", 00:13:52.538 "is_configured": false, 00:13:52.538 "data_offset": 0, 00:13:52.538 "data_size": 65536 00:13:52.538 }, 00:13:52.538 { 00:13:52.538 "name": null, 00:13:52.539 "uuid": "8c0aa643-3a0d-4f32-9318-bf3a17da18fe", 00:13:52.539 "is_configured": false, 00:13:52.539 "data_offset": 0, 00:13:52.539 "data_size": 65536 00:13:52.539 }, 00:13:52.539 { 00:13:52.539 "name": "BaseBdev3", 00:13:52.539 "uuid": "5058d3e7-3887-4953-bbeb-bd3877b55367", 00:13:52.539 "is_configured": true, 00:13:52.539 "data_offset": 0, 00:13:52.539 "data_size": 65536 00:13:52.539 }, 00:13:52.539 { 00:13:52.539 "name": "BaseBdev4", 00:13:52.539 "uuid": "422141b4-02f1-4427-a160-11cfae8aa0ab", 00:13:52.539 "is_configured": true, 00:13:52.539 "data_offset": 0, 00:13:52.539 "data_size": 65536 00:13:52.539 } 00:13:52.539 ] 00:13:52.539 }' 00:13:52.539 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.539 23:53:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.799 [2024-11-02 23:53:46.875770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.799 23:53:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.059 23:53:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.059 23:53:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.059 "name": "Existed_Raid", 00:13:53.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.059 "strip_size_kb": 64, 00:13:53.059 "state": "configuring", 00:13:53.059 "raid_level": "raid5f", 00:13:53.059 "superblock": false, 00:13:53.059 "num_base_bdevs": 4, 00:13:53.059 "num_base_bdevs_discovered": 3, 00:13:53.059 "num_base_bdevs_operational": 4, 00:13:53.059 "base_bdevs_list": [ 00:13:53.059 { 00:13:53.059 "name": null, 00:13:53.059 "uuid": "8cc620b3-3dfb-457b-8241-1da1dde6864f", 00:13:53.059 "is_configured": false, 00:13:53.059 "data_offset": 0, 00:13:53.059 "data_size": 65536 00:13:53.059 }, 00:13:53.059 { 00:13:53.059 "name": "BaseBdev2", 00:13:53.059 "uuid": "8c0aa643-3a0d-4f32-9318-bf3a17da18fe", 00:13:53.059 "is_configured": true, 00:13:53.060 "data_offset": 0, 00:13:53.060 "data_size": 65536 00:13:53.060 }, 00:13:53.060 { 00:13:53.060 "name": "BaseBdev3", 00:13:53.060 "uuid": "5058d3e7-3887-4953-bbeb-bd3877b55367", 00:13:53.060 "is_configured": true, 00:13:53.060 "data_offset": 0, 00:13:53.060 "data_size": 65536 00:13:53.060 }, 00:13:53.060 { 00:13:53.060 "name": "BaseBdev4", 00:13:53.060 "uuid": "422141b4-02f1-4427-a160-11cfae8aa0ab", 00:13:53.060 "is_configured": true, 00:13:53.060 "data_offset": 0, 00:13:53.060 "data_size": 65536 00:13:53.060 } 00:13:53.060 ] 00:13:53.060 }' 00:13:53.060 23:53:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.060 23:53:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8cc620b3-3dfb-457b-8241-1da1dde6864f 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.321 [2024-11-02 23:53:47.357823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:53.321 [2024-11-02 
23:53:47.357870] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:53.321 [2024-11-02 23:53:47.357877] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:53.321 [2024-11-02 23:53:47.358140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:13:53.321 [2024-11-02 23:53:47.358583] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:53.321 [2024-11-02 23:53:47.358614] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:13:53.321 [2024-11-02 23:53:47.358842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.321 NewBaseBdev 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.321 [ 00:13:53.321 { 00:13:53.321 "name": "NewBaseBdev", 00:13:53.321 "aliases": [ 00:13:53.321 "8cc620b3-3dfb-457b-8241-1da1dde6864f" 00:13:53.321 ], 00:13:53.321 "product_name": "Malloc disk", 00:13:53.321 "block_size": 512, 00:13:53.321 "num_blocks": 65536, 00:13:53.321 "uuid": "8cc620b3-3dfb-457b-8241-1da1dde6864f", 00:13:53.321 "assigned_rate_limits": { 00:13:53.321 "rw_ios_per_sec": 0, 00:13:53.321 "rw_mbytes_per_sec": 0, 00:13:53.321 "r_mbytes_per_sec": 0, 00:13:53.321 "w_mbytes_per_sec": 0 00:13:53.321 }, 00:13:53.321 "claimed": true, 00:13:53.321 "claim_type": "exclusive_write", 00:13:53.321 "zoned": false, 00:13:53.321 "supported_io_types": { 00:13:53.321 "read": true, 00:13:53.321 "write": true, 00:13:53.321 "unmap": true, 00:13:53.321 "flush": true, 00:13:53.321 "reset": true, 00:13:53.321 "nvme_admin": false, 00:13:53.321 "nvme_io": false, 00:13:53.321 "nvme_io_md": false, 00:13:53.321 "write_zeroes": true, 00:13:53.321 "zcopy": true, 00:13:53.321 "get_zone_info": false, 00:13:53.321 "zone_management": false, 00:13:53.321 "zone_append": false, 00:13:53.321 "compare": false, 00:13:53.321 "compare_and_write": false, 00:13:53.321 "abort": true, 00:13:53.321 "seek_hole": false, 00:13:53.321 "seek_data": false, 00:13:53.321 "copy": true, 00:13:53.321 "nvme_iov_md": false 00:13:53.321 }, 00:13:53.321 "memory_domains": [ 00:13:53.321 { 00:13:53.321 "dma_device_id": "system", 00:13:53.321 "dma_device_type": 1 00:13:53.321 }, 00:13:53.321 { 00:13:53.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.321 "dma_device_type": 2 00:13:53.321 } 
00:13:53.321 ], 00:13:53.321 "driver_specific": {} 00:13:53.321 } 00:13:53.321 ] 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.321 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.600 23:53:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.600 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.600 "name": "Existed_Raid", 00:13:53.600 "uuid": "146d341a-6de8-44db-a284-5f2bd02f3871", 00:13:53.600 "strip_size_kb": 64, 00:13:53.600 "state": "online", 00:13:53.600 "raid_level": "raid5f", 00:13:53.600 "superblock": false, 00:13:53.600 "num_base_bdevs": 4, 00:13:53.600 "num_base_bdevs_discovered": 4, 00:13:53.600 "num_base_bdevs_operational": 4, 00:13:53.600 "base_bdevs_list": [ 00:13:53.600 { 00:13:53.600 "name": "NewBaseBdev", 00:13:53.600 "uuid": "8cc620b3-3dfb-457b-8241-1da1dde6864f", 00:13:53.600 "is_configured": true, 00:13:53.600 "data_offset": 0, 00:13:53.600 "data_size": 65536 00:13:53.600 }, 00:13:53.600 { 00:13:53.600 "name": "BaseBdev2", 00:13:53.600 "uuid": "8c0aa643-3a0d-4f32-9318-bf3a17da18fe", 00:13:53.600 "is_configured": true, 00:13:53.600 "data_offset": 0, 00:13:53.600 "data_size": 65536 00:13:53.600 }, 00:13:53.600 { 00:13:53.600 "name": "BaseBdev3", 00:13:53.600 "uuid": "5058d3e7-3887-4953-bbeb-bd3877b55367", 00:13:53.600 "is_configured": true, 00:13:53.600 "data_offset": 0, 00:13:53.600 "data_size": 65536 00:13:53.600 }, 00:13:53.600 { 00:13:53.600 "name": "BaseBdev4", 00:13:53.600 "uuid": "422141b4-02f1-4427-a160-11cfae8aa0ab", 00:13:53.600 "is_configured": true, 00:13:53.600 "data_offset": 0, 00:13:53.600 "data_size": 65536 00:13:53.600 } 00:13:53.600 ] 00:13:53.600 }' 00:13:53.600 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.600 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.862 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:53.862 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:53.862 23:53:47 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:53.862 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:53.862 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:53.862 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:53.862 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:53.862 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:53.862 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.862 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.862 [2024-11-02 23:53:47.773338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.862 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.862 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:53.862 "name": "Existed_Raid", 00:13:53.862 "aliases": [ 00:13:53.862 "146d341a-6de8-44db-a284-5f2bd02f3871" 00:13:53.862 ], 00:13:53.862 "product_name": "Raid Volume", 00:13:53.862 "block_size": 512, 00:13:53.862 "num_blocks": 196608, 00:13:53.862 "uuid": "146d341a-6de8-44db-a284-5f2bd02f3871", 00:13:53.862 "assigned_rate_limits": { 00:13:53.862 "rw_ios_per_sec": 0, 00:13:53.862 "rw_mbytes_per_sec": 0, 00:13:53.862 "r_mbytes_per_sec": 0, 00:13:53.862 "w_mbytes_per_sec": 0 00:13:53.862 }, 00:13:53.862 "claimed": false, 00:13:53.862 "zoned": false, 00:13:53.862 "supported_io_types": { 00:13:53.862 "read": true, 00:13:53.862 "write": true, 00:13:53.862 "unmap": false, 00:13:53.862 "flush": false, 00:13:53.862 "reset": true, 00:13:53.862 "nvme_admin": false, 00:13:53.862 "nvme_io": false, 00:13:53.862 "nvme_io_md": 
false, 00:13:53.862 "write_zeroes": true, 00:13:53.862 "zcopy": false, 00:13:53.862 "get_zone_info": false, 00:13:53.862 "zone_management": false, 00:13:53.862 "zone_append": false, 00:13:53.862 "compare": false, 00:13:53.862 "compare_and_write": false, 00:13:53.862 "abort": false, 00:13:53.862 "seek_hole": false, 00:13:53.862 "seek_data": false, 00:13:53.862 "copy": false, 00:13:53.862 "nvme_iov_md": false 00:13:53.862 }, 00:13:53.862 "driver_specific": { 00:13:53.862 "raid": { 00:13:53.862 "uuid": "146d341a-6de8-44db-a284-5f2bd02f3871", 00:13:53.862 "strip_size_kb": 64, 00:13:53.862 "state": "online", 00:13:53.862 "raid_level": "raid5f", 00:13:53.862 "superblock": false, 00:13:53.862 "num_base_bdevs": 4, 00:13:53.862 "num_base_bdevs_discovered": 4, 00:13:53.862 "num_base_bdevs_operational": 4, 00:13:53.862 "base_bdevs_list": [ 00:13:53.863 { 00:13:53.863 "name": "NewBaseBdev", 00:13:53.863 "uuid": "8cc620b3-3dfb-457b-8241-1da1dde6864f", 00:13:53.863 "is_configured": true, 00:13:53.863 "data_offset": 0, 00:13:53.863 "data_size": 65536 00:13:53.863 }, 00:13:53.863 { 00:13:53.863 "name": "BaseBdev2", 00:13:53.863 "uuid": "8c0aa643-3a0d-4f32-9318-bf3a17da18fe", 00:13:53.863 "is_configured": true, 00:13:53.863 "data_offset": 0, 00:13:53.863 "data_size": 65536 00:13:53.863 }, 00:13:53.863 { 00:13:53.863 "name": "BaseBdev3", 00:13:53.863 "uuid": "5058d3e7-3887-4953-bbeb-bd3877b55367", 00:13:53.863 "is_configured": true, 00:13:53.863 "data_offset": 0, 00:13:53.863 "data_size": 65536 00:13:53.863 }, 00:13:53.863 { 00:13:53.863 "name": "BaseBdev4", 00:13:53.863 "uuid": "422141b4-02f1-4427-a160-11cfae8aa0ab", 00:13:53.863 "is_configured": true, 00:13:53.863 "data_offset": 0, 00:13:53.863 "data_size": 65536 00:13:53.863 } 00:13:53.863 ] 00:13:53.863 } 00:13:53.863 } 00:13:53.863 }' 00:13:53.863 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:53.863 23:53:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:53.863 BaseBdev2 00:13:53.863 BaseBdev3 00:13:53.863 BaseBdev4' 00:13:53.863 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.863 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:53.863 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.863 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:53.863 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.863 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.863 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.863 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.863 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.863 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.863 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.863 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:53.863 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.863 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.863 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:13:54.123 23:53:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.123 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.123 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.123 23:53:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.123 23:53:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.123 23:53:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:54.123 23:53:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.123 23:53:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.123 23:53:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.123 23:53:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.123 23:53:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.123 23:53:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.123 23:53:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:54.123 23:53:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.123 23:53:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.123 23:53:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.124 23:53:48 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.124 23:53:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.124 23:53:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.124 23:53:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:54.124 23:53:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.124 23:53:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.124 [2024-11-02 23:53:48.092621] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:54.124 [2024-11-02 23:53:48.092650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:54.124 [2024-11-02 23:53:48.092717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:54.124 [2024-11-02 23:53:48.092979] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:54.124 [2024-11-02 23:53:48.092996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:13:54.124 23:53:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.124 23:53:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 93043 00:13:54.124 23:53:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 93043 ']' 00:13:54.124 23:53:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 93043 00:13:54.124 23:53:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:13:54.124 23:53:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:54.124 23:53:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 93043 00:13:54.124 23:53:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:54.124 23:53:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:54.124 23:53:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 93043' 00:13:54.124 killing process with pid 93043 00:13:54.124 23:53:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 93043 00:13:54.124 [2024-11-02 23:53:48.143854] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:54.124 23:53:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 93043 00:13:54.124 [2024-11-02 23:53:48.184510] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.384 23:53:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:54.384 00:13:54.384 real 0m9.419s 00:13:54.384 user 0m16.069s 00:13:54.384 sys 0m2.039s 00:13:54.384 23:53:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:54.384 23:53:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.384 ************************************ 00:13:54.384 END TEST raid5f_state_function_test 00:13:54.384 ************************************ 00:13:54.384 23:53:48 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:13:54.384 23:53:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:54.384 23:53:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:54.384 23:53:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:54.384 ************************************ 00:13:54.384 START TEST 
raid5f_state_function_test_sb 00:13:54.384 ************************************ 00:13:54.384 23:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:54.645 
23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=93689 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:54.645 Process raid pid: 93689 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93689' 00:13:54.645 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 93689 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 93689 ']' 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:54.645 23:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.645 [2024-11-02 23:53:48.572204] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:13:54.645 [2024-11-02 23:53:48.572345] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.645 [2024-11-02 23:53:48.726047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.906 [2024-11-02 23:53:48.751736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.906 [2024-11-02 23:53:48.794514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.906 [2024-11-02 23:53:48.794629] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.475 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:55.475 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:13:55.475 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:55.475 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.475 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.475 [2024-11-02 23:53:49.411827] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:55.475 [2024-11-02 23:53:49.411920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:55.475 [2024-11-02 23:53:49.411951] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:55.475 [2024-11-02 23:53:49.411976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:55.475 [2024-11-02 23:53:49.412012] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:13:55.475 [2024-11-02 23:53:49.412036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:55.475 [2024-11-02 23:53:49.412067] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:55.475 [2024-11-02 23:53:49.412088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:55.475 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.475 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:55.475 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.475 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.475 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.475 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.475 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.475 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.475 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.475 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.475 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.476 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.476 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:55.476 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.476 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.476 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.476 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.476 "name": "Existed_Raid", 00:13:55.476 "uuid": "5aa70261-6cec-42ed-a038-c13b30881171", 00:13:55.476 "strip_size_kb": 64, 00:13:55.476 "state": "configuring", 00:13:55.476 "raid_level": "raid5f", 00:13:55.476 "superblock": true, 00:13:55.476 "num_base_bdevs": 4, 00:13:55.476 "num_base_bdevs_discovered": 0, 00:13:55.476 "num_base_bdevs_operational": 4, 00:13:55.476 "base_bdevs_list": [ 00:13:55.476 { 00:13:55.476 "name": "BaseBdev1", 00:13:55.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.476 "is_configured": false, 00:13:55.476 "data_offset": 0, 00:13:55.476 "data_size": 0 00:13:55.476 }, 00:13:55.476 { 00:13:55.476 "name": "BaseBdev2", 00:13:55.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.476 "is_configured": false, 00:13:55.476 "data_offset": 0, 00:13:55.476 "data_size": 0 00:13:55.476 }, 00:13:55.476 { 00:13:55.476 "name": "BaseBdev3", 00:13:55.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.476 "is_configured": false, 00:13:55.476 "data_offset": 0, 00:13:55.476 "data_size": 0 00:13:55.476 }, 00:13:55.476 { 00:13:55.476 "name": "BaseBdev4", 00:13:55.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.476 "is_configured": false, 00:13:55.476 "data_offset": 0, 00:13:55.476 "data_size": 0 00:13:55.476 } 00:13:55.476 ] 00:13:55.476 }' 00:13:55.476 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.476 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:55.735 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:55.735 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.735 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.735 [2024-11-02 23:53:49.823036] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:55.735 [2024-11-02 23:53:49.823118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:13:55.735 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.735 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.003 [2024-11-02 23:53:49.835048] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:56.003 [2024-11-02 23:53:49.835089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:56.003 [2024-11-02 23:53:49.835099] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:56.003 [2024-11-02 23:53:49.835108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:56.003 [2024-11-02 23:53:49.835115] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:56.003 [2024-11-02 23:53:49.835123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:56.003 [2024-11-02 23:53:49.835129] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:56.003 [2024-11-02 23:53:49.835138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.003 [2024-11-02 23:53:49.855700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.003 BaseBdev1 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.003 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.003 [ 00:13:56.003 { 00:13:56.003 "name": "BaseBdev1", 00:13:56.003 "aliases": [ 00:13:56.003 "4ab9e6a3-0e9b-4890-a921-85047559544a" 00:13:56.003 ], 00:13:56.003 "product_name": "Malloc disk", 00:13:56.003 "block_size": 512, 00:13:56.003 "num_blocks": 65536, 00:13:56.003 "uuid": "4ab9e6a3-0e9b-4890-a921-85047559544a", 00:13:56.003 "assigned_rate_limits": { 00:13:56.003 "rw_ios_per_sec": 0, 00:13:56.003 "rw_mbytes_per_sec": 0, 00:13:56.003 "r_mbytes_per_sec": 0, 00:13:56.003 "w_mbytes_per_sec": 0 00:13:56.003 }, 00:13:56.003 "claimed": true, 00:13:56.003 "claim_type": "exclusive_write", 00:13:56.003 "zoned": false, 00:13:56.003 "supported_io_types": { 00:13:56.003 "read": true, 00:13:56.003 "write": true, 00:13:56.003 "unmap": true, 00:13:56.003 "flush": true, 00:13:56.003 "reset": true, 00:13:56.003 "nvme_admin": false, 00:13:56.003 "nvme_io": false, 00:13:56.003 "nvme_io_md": false, 00:13:56.003 "write_zeroes": true, 00:13:56.003 "zcopy": true, 00:13:56.003 "get_zone_info": false, 00:13:56.003 "zone_management": false, 00:13:56.003 "zone_append": false, 00:13:56.003 "compare": false, 00:13:56.003 "compare_and_write": false, 00:13:56.003 "abort": true, 00:13:56.003 "seek_hole": false, 00:13:56.003 "seek_data": false, 00:13:56.003 "copy": true, 00:13:56.003 "nvme_iov_md": false 00:13:56.004 }, 00:13:56.004 "memory_domains": [ 00:13:56.004 { 00:13:56.004 "dma_device_id": "system", 00:13:56.004 "dma_device_type": 1 00:13:56.004 }, 00:13:56.004 { 00:13:56.004 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:56.004 "dma_device_type": 2 00:13:56.004 } 00:13:56.004 ], 00:13:56.004 "driver_specific": {} 00:13:56.004 } 00:13:56.004 ] 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.004 "name": "Existed_Raid", 00:13:56.004 "uuid": "a42c7d1f-4aed-4d37-812b-651573c38dc6", 00:13:56.004 "strip_size_kb": 64, 00:13:56.004 "state": "configuring", 00:13:56.004 "raid_level": "raid5f", 00:13:56.004 "superblock": true, 00:13:56.004 "num_base_bdevs": 4, 00:13:56.004 "num_base_bdevs_discovered": 1, 00:13:56.004 "num_base_bdevs_operational": 4, 00:13:56.004 "base_bdevs_list": [ 00:13:56.004 { 00:13:56.004 "name": "BaseBdev1", 00:13:56.004 "uuid": "4ab9e6a3-0e9b-4890-a921-85047559544a", 00:13:56.004 "is_configured": true, 00:13:56.004 "data_offset": 2048, 00:13:56.004 "data_size": 63488 00:13:56.004 }, 00:13:56.004 { 00:13:56.004 "name": "BaseBdev2", 00:13:56.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.004 "is_configured": false, 00:13:56.004 "data_offset": 0, 00:13:56.004 "data_size": 0 00:13:56.004 }, 00:13:56.004 { 00:13:56.004 "name": "BaseBdev3", 00:13:56.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.004 "is_configured": false, 00:13:56.004 "data_offset": 0, 00:13:56.004 "data_size": 0 00:13:56.004 }, 00:13:56.004 { 00:13:56.004 "name": "BaseBdev4", 00:13:56.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.004 "is_configured": false, 00:13:56.004 "data_offset": 0, 00:13:56.004 "data_size": 0 00:13:56.004 } 00:13:56.004 ] 00:13:56.004 }' 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.004 23:53:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:56.274 23:53:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.274 [2024-11-02 23:53:50.291005] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:56.274 [2024-11-02 23:53:50.291114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.274 [2024-11-02 23:53:50.303018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.274 [2024-11-02 23:53:50.304942] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:56.274 [2024-11-02 23:53:50.305023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:56.274 [2024-11-02 23:53:50.305050] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:56.274 [2024-11-02 23:53:50.305071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:56.274 [2024-11-02 23:53:50.305089] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:56.274 [2024-11-02 23:53:50.305108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.274 23:53:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.274 "name": "Existed_Raid", 00:13:56.274 "uuid": "ae1c5ab2-c6e9-4064-9e39-a64cdc1e9fd4", 00:13:56.274 "strip_size_kb": 64, 00:13:56.274 "state": "configuring", 00:13:56.274 "raid_level": "raid5f", 00:13:56.274 "superblock": true, 00:13:56.274 "num_base_bdevs": 4, 00:13:56.274 "num_base_bdevs_discovered": 1, 00:13:56.274 "num_base_bdevs_operational": 4, 00:13:56.274 "base_bdevs_list": [ 00:13:56.274 { 00:13:56.274 "name": "BaseBdev1", 00:13:56.274 "uuid": "4ab9e6a3-0e9b-4890-a921-85047559544a", 00:13:56.274 "is_configured": true, 00:13:56.274 "data_offset": 2048, 00:13:56.274 "data_size": 63488 00:13:56.274 }, 00:13:56.274 { 00:13:56.274 "name": "BaseBdev2", 00:13:56.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.274 "is_configured": false, 00:13:56.274 "data_offset": 0, 00:13:56.274 "data_size": 0 00:13:56.274 }, 00:13:56.274 { 00:13:56.274 "name": "BaseBdev3", 00:13:56.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.274 "is_configured": false, 00:13:56.274 "data_offset": 0, 00:13:56.274 "data_size": 0 00:13:56.274 }, 00:13:56.274 { 00:13:56.274 "name": "BaseBdev4", 00:13:56.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.274 "is_configured": false, 00:13:56.274 "data_offset": 0, 00:13:56.274 "data_size": 0 00:13:56.274 } 00:13:56.274 ] 00:13:56.274 }' 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.274 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.846 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:56.846 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:56.846 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.846 [2024-11-02 23:53:50.713826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.846 BaseBdev2 00:13:56.846 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.846 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:56.846 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:56.846 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:56.846 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:56.846 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:56.846 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:56.846 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:56.846 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.846 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.846 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.846 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:56.846 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.846 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.846 [ 00:13:56.846 { 00:13:56.846 "name": "BaseBdev2", 00:13:56.846 "aliases": [ 00:13:56.846 
"7d81ec72-05bb-4e7f-a512-749432699c9b" 00:13:56.846 ], 00:13:56.846 "product_name": "Malloc disk", 00:13:56.846 "block_size": 512, 00:13:56.846 "num_blocks": 65536, 00:13:56.846 "uuid": "7d81ec72-05bb-4e7f-a512-749432699c9b", 00:13:56.846 "assigned_rate_limits": { 00:13:56.846 "rw_ios_per_sec": 0, 00:13:56.846 "rw_mbytes_per_sec": 0, 00:13:56.846 "r_mbytes_per_sec": 0, 00:13:56.847 "w_mbytes_per_sec": 0 00:13:56.847 }, 00:13:56.847 "claimed": true, 00:13:56.847 "claim_type": "exclusive_write", 00:13:56.847 "zoned": false, 00:13:56.847 "supported_io_types": { 00:13:56.847 "read": true, 00:13:56.847 "write": true, 00:13:56.847 "unmap": true, 00:13:56.847 "flush": true, 00:13:56.847 "reset": true, 00:13:56.847 "nvme_admin": false, 00:13:56.847 "nvme_io": false, 00:13:56.847 "nvme_io_md": false, 00:13:56.847 "write_zeroes": true, 00:13:56.847 "zcopy": true, 00:13:56.847 "get_zone_info": false, 00:13:56.847 "zone_management": false, 00:13:56.847 "zone_append": false, 00:13:56.847 "compare": false, 00:13:56.847 "compare_and_write": false, 00:13:56.847 "abort": true, 00:13:56.847 "seek_hole": false, 00:13:56.847 "seek_data": false, 00:13:56.847 "copy": true, 00:13:56.847 "nvme_iov_md": false 00:13:56.847 }, 00:13:56.847 "memory_domains": [ 00:13:56.847 { 00:13:56.847 "dma_device_id": "system", 00:13:56.847 "dma_device_type": 1 00:13:56.847 }, 00:13:56.847 { 00:13:56.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.847 "dma_device_type": 2 00:13:56.847 } 00:13:56.847 ], 00:13:56.847 "driver_specific": {} 00:13:56.847 } 00:13:56.847 ] 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.847 "name": "Existed_Raid", 00:13:56.847 "uuid": 
"ae1c5ab2-c6e9-4064-9e39-a64cdc1e9fd4", 00:13:56.847 "strip_size_kb": 64, 00:13:56.847 "state": "configuring", 00:13:56.847 "raid_level": "raid5f", 00:13:56.847 "superblock": true, 00:13:56.847 "num_base_bdevs": 4, 00:13:56.847 "num_base_bdevs_discovered": 2, 00:13:56.847 "num_base_bdevs_operational": 4, 00:13:56.847 "base_bdevs_list": [ 00:13:56.847 { 00:13:56.847 "name": "BaseBdev1", 00:13:56.847 "uuid": "4ab9e6a3-0e9b-4890-a921-85047559544a", 00:13:56.847 "is_configured": true, 00:13:56.847 "data_offset": 2048, 00:13:56.847 "data_size": 63488 00:13:56.847 }, 00:13:56.847 { 00:13:56.847 "name": "BaseBdev2", 00:13:56.847 "uuid": "7d81ec72-05bb-4e7f-a512-749432699c9b", 00:13:56.847 "is_configured": true, 00:13:56.847 "data_offset": 2048, 00:13:56.847 "data_size": 63488 00:13:56.847 }, 00:13:56.847 { 00:13:56.847 "name": "BaseBdev3", 00:13:56.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.847 "is_configured": false, 00:13:56.847 "data_offset": 0, 00:13:56.847 "data_size": 0 00:13:56.847 }, 00:13:56.847 { 00:13:56.847 "name": "BaseBdev4", 00:13:56.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.847 "is_configured": false, 00:13:56.847 "data_offset": 0, 00:13:56.847 "data_size": 0 00:13:56.847 } 00:13:56.847 ] 00:13:56.847 }' 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.847 23:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.106 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:57.106 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.106 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.106 [2024-11-02 23:53:51.164212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.106 BaseBdev3 
00:13:57.106 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.106 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:57.107 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:57.107 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:57.107 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:57.107 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:57.107 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:57.107 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:57.107 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.107 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.107 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.107 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:57.107 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.107 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.107 [ 00:13:57.107 { 00:13:57.107 "name": "BaseBdev3", 00:13:57.107 "aliases": [ 00:13:57.107 "f21a7642-b49e-4004-a91c-a0b552ea697b" 00:13:57.107 ], 00:13:57.107 "product_name": "Malloc disk", 00:13:57.107 "block_size": 512, 00:13:57.107 "num_blocks": 65536, 00:13:57.107 "uuid": "f21a7642-b49e-4004-a91c-a0b552ea697b", 00:13:57.107 
"assigned_rate_limits": { 00:13:57.107 "rw_ios_per_sec": 0, 00:13:57.107 "rw_mbytes_per_sec": 0, 00:13:57.107 "r_mbytes_per_sec": 0, 00:13:57.107 "w_mbytes_per_sec": 0 00:13:57.107 }, 00:13:57.107 "claimed": true, 00:13:57.107 "claim_type": "exclusive_write", 00:13:57.107 "zoned": false, 00:13:57.107 "supported_io_types": { 00:13:57.107 "read": true, 00:13:57.107 "write": true, 00:13:57.107 "unmap": true, 00:13:57.107 "flush": true, 00:13:57.107 "reset": true, 00:13:57.107 "nvme_admin": false, 00:13:57.107 "nvme_io": false, 00:13:57.107 "nvme_io_md": false, 00:13:57.107 "write_zeroes": true, 00:13:57.107 "zcopy": true, 00:13:57.107 "get_zone_info": false, 00:13:57.107 "zone_management": false, 00:13:57.107 "zone_append": false, 00:13:57.107 "compare": false, 00:13:57.107 "compare_and_write": false, 00:13:57.107 "abort": true, 00:13:57.107 "seek_hole": false, 00:13:57.366 "seek_data": false, 00:13:57.366 "copy": true, 00:13:57.366 "nvme_iov_md": false 00:13:57.366 }, 00:13:57.366 "memory_domains": [ 00:13:57.366 { 00:13:57.366 "dma_device_id": "system", 00:13:57.366 "dma_device_type": 1 00:13:57.366 }, 00:13:57.366 { 00:13:57.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.366 "dma_device_type": 2 00:13:57.366 } 00:13:57.366 ], 00:13:57.366 "driver_specific": {} 00:13:57.366 } 00:13:57.366 ] 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.366 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.366 "name": "Existed_Raid", 00:13:57.367 "uuid": "ae1c5ab2-c6e9-4064-9e39-a64cdc1e9fd4", 00:13:57.367 "strip_size_kb": 64, 00:13:57.367 "state": "configuring", 00:13:57.367 "raid_level": "raid5f", 00:13:57.367 "superblock": true, 00:13:57.367 "num_base_bdevs": 4, 00:13:57.367 "num_base_bdevs_discovered": 3, 
00:13:57.367 "num_base_bdevs_operational": 4, 00:13:57.367 "base_bdevs_list": [ 00:13:57.367 { 00:13:57.367 "name": "BaseBdev1", 00:13:57.367 "uuid": "4ab9e6a3-0e9b-4890-a921-85047559544a", 00:13:57.367 "is_configured": true, 00:13:57.367 "data_offset": 2048, 00:13:57.367 "data_size": 63488 00:13:57.367 }, 00:13:57.367 { 00:13:57.367 "name": "BaseBdev2", 00:13:57.367 "uuid": "7d81ec72-05bb-4e7f-a512-749432699c9b", 00:13:57.367 "is_configured": true, 00:13:57.367 "data_offset": 2048, 00:13:57.367 "data_size": 63488 00:13:57.367 }, 00:13:57.367 { 00:13:57.367 "name": "BaseBdev3", 00:13:57.367 "uuid": "f21a7642-b49e-4004-a91c-a0b552ea697b", 00:13:57.367 "is_configured": true, 00:13:57.367 "data_offset": 2048, 00:13:57.367 "data_size": 63488 00:13:57.367 }, 00:13:57.367 { 00:13:57.367 "name": "BaseBdev4", 00:13:57.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.367 "is_configured": false, 00:13:57.367 "data_offset": 0, 00:13:57.367 "data_size": 0 00:13:57.367 } 00:13:57.367 ] 00:13:57.367 }' 00:13:57.367 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.367 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.626 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:57.626 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.626 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.626 [2024-11-02 23:53:51.666509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:57.626 [2024-11-02 23:53:51.666815] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:57.626 [2024-11-02 23:53:51.666869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:57.626 BaseBdev4 
00:13:57.626 [2024-11-02 23:53:51.667165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:57.626 [2024-11-02 23:53:51.667626] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:57.626 [2024-11-02 23:53:51.667681] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:57.626 [2024-11-02 23:53:51.667854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.626 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.626 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:57.626 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:57.626 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:57.626 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:57.626 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:57.626 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:57.626 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:57.626 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:57.627 23:53:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.627 [ 00:13:57.627 { 00:13:57.627 "name": "BaseBdev4", 00:13:57.627 "aliases": [ 00:13:57.627 "4f15d002-6e56-4156-954a-bc4b29f73fdb" 00:13:57.627 ], 00:13:57.627 "product_name": "Malloc disk", 00:13:57.627 "block_size": 512, 00:13:57.627 "num_blocks": 65536, 00:13:57.627 "uuid": "4f15d002-6e56-4156-954a-bc4b29f73fdb", 00:13:57.627 "assigned_rate_limits": { 00:13:57.627 "rw_ios_per_sec": 0, 00:13:57.627 "rw_mbytes_per_sec": 0, 00:13:57.627 "r_mbytes_per_sec": 0, 00:13:57.627 "w_mbytes_per_sec": 0 00:13:57.627 }, 00:13:57.627 "claimed": true, 00:13:57.627 "claim_type": "exclusive_write", 00:13:57.627 "zoned": false, 00:13:57.627 "supported_io_types": { 00:13:57.627 "read": true, 00:13:57.627 "write": true, 00:13:57.627 "unmap": true, 00:13:57.627 "flush": true, 00:13:57.627 "reset": true, 00:13:57.627 "nvme_admin": false, 00:13:57.627 "nvme_io": false, 00:13:57.627 "nvme_io_md": false, 00:13:57.627 "write_zeroes": true, 00:13:57.627 "zcopy": true, 00:13:57.627 "get_zone_info": false, 00:13:57.627 "zone_management": false, 00:13:57.627 "zone_append": false, 00:13:57.627 "compare": false, 00:13:57.627 "compare_and_write": false, 00:13:57.627 "abort": true, 00:13:57.627 "seek_hole": false, 00:13:57.627 "seek_data": false, 00:13:57.627 "copy": true, 00:13:57.627 "nvme_iov_md": false 00:13:57.627 }, 00:13:57.627 "memory_domains": [ 00:13:57.627 { 00:13:57.627 "dma_device_id": "system", 00:13:57.627 "dma_device_type": 1 00:13:57.627 }, 00:13:57.627 { 00:13:57.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.627 "dma_device_type": 2 00:13:57.627 } 00:13:57.627 ], 00:13:57.627 "driver_specific": {} 00:13:57.627 } 00:13:57.627 ] 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.627 23:53:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.627 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:13:57.887 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.887 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.887 "name": "Existed_Raid", 00:13:57.887 "uuid": "ae1c5ab2-c6e9-4064-9e39-a64cdc1e9fd4", 00:13:57.887 "strip_size_kb": 64, 00:13:57.887 "state": "online", 00:13:57.887 "raid_level": "raid5f", 00:13:57.887 "superblock": true, 00:13:57.887 "num_base_bdevs": 4, 00:13:57.887 "num_base_bdevs_discovered": 4, 00:13:57.887 "num_base_bdevs_operational": 4, 00:13:57.887 "base_bdevs_list": [ 00:13:57.887 { 00:13:57.887 "name": "BaseBdev1", 00:13:57.887 "uuid": "4ab9e6a3-0e9b-4890-a921-85047559544a", 00:13:57.887 "is_configured": true, 00:13:57.887 "data_offset": 2048, 00:13:57.887 "data_size": 63488 00:13:57.887 }, 00:13:57.887 { 00:13:57.887 "name": "BaseBdev2", 00:13:57.887 "uuid": "7d81ec72-05bb-4e7f-a512-749432699c9b", 00:13:57.887 "is_configured": true, 00:13:57.887 "data_offset": 2048, 00:13:57.887 "data_size": 63488 00:13:57.887 }, 00:13:57.887 { 00:13:57.887 "name": "BaseBdev3", 00:13:57.887 "uuid": "f21a7642-b49e-4004-a91c-a0b552ea697b", 00:13:57.887 "is_configured": true, 00:13:57.887 "data_offset": 2048, 00:13:57.887 "data_size": 63488 00:13:57.887 }, 00:13:57.887 { 00:13:57.887 "name": "BaseBdev4", 00:13:57.887 "uuid": "4f15d002-6e56-4156-954a-bc4b29f73fdb", 00:13:57.887 "is_configured": true, 00:13:57.887 "data_offset": 2048, 00:13:57.887 "data_size": 63488 00:13:57.887 } 00:13:57.887 ] 00:13:57.887 }' 00:13:57.887 23:53:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.887 23:53:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.147 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:58.147 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:13:58.147 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:58.147 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:58.147 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:58.147 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:58.147 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:58.147 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:58.147 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.147 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.147 [2024-11-02 23:53:52.173901] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:58.147 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.147 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:58.147 "name": "Existed_Raid", 00:13:58.147 "aliases": [ 00:13:58.147 "ae1c5ab2-c6e9-4064-9e39-a64cdc1e9fd4" 00:13:58.147 ], 00:13:58.147 "product_name": "Raid Volume", 00:13:58.147 "block_size": 512, 00:13:58.147 "num_blocks": 190464, 00:13:58.147 "uuid": "ae1c5ab2-c6e9-4064-9e39-a64cdc1e9fd4", 00:13:58.147 "assigned_rate_limits": { 00:13:58.147 "rw_ios_per_sec": 0, 00:13:58.147 "rw_mbytes_per_sec": 0, 00:13:58.147 "r_mbytes_per_sec": 0, 00:13:58.147 "w_mbytes_per_sec": 0 00:13:58.147 }, 00:13:58.147 "claimed": false, 00:13:58.147 "zoned": false, 00:13:58.147 "supported_io_types": { 00:13:58.147 "read": true, 00:13:58.147 "write": true, 00:13:58.147 "unmap": false, 00:13:58.147 "flush": false, 
00:13:58.147 "reset": true, 00:13:58.147 "nvme_admin": false, 00:13:58.147 "nvme_io": false, 00:13:58.147 "nvme_io_md": false, 00:13:58.147 "write_zeroes": true, 00:13:58.147 "zcopy": false, 00:13:58.147 "get_zone_info": false, 00:13:58.147 "zone_management": false, 00:13:58.147 "zone_append": false, 00:13:58.147 "compare": false, 00:13:58.147 "compare_and_write": false, 00:13:58.147 "abort": false, 00:13:58.147 "seek_hole": false, 00:13:58.147 "seek_data": false, 00:13:58.147 "copy": false, 00:13:58.147 "nvme_iov_md": false 00:13:58.147 }, 00:13:58.147 "driver_specific": { 00:13:58.147 "raid": { 00:13:58.147 "uuid": "ae1c5ab2-c6e9-4064-9e39-a64cdc1e9fd4", 00:13:58.147 "strip_size_kb": 64, 00:13:58.147 "state": "online", 00:13:58.147 "raid_level": "raid5f", 00:13:58.147 "superblock": true, 00:13:58.147 "num_base_bdevs": 4, 00:13:58.147 "num_base_bdevs_discovered": 4, 00:13:58.147 "num_base_bdevs_operational": 4, 00:13:58.147 "base_bdevs_list": [ 00:13:58.147 { 00:13:58.147 "name": "BaseBdev1", 00:13:58.147 "uuid": "4ab9e6a3-0e9b-4890-a921-85047559544a", 00:13:58.147 "is_configured": true, 00:13:58.147 "data_offset": 2048, 00:13:58.147 "data_size": 63488 00:13:58.147 }, 00:13:58.147 { 00:13:58.147 "name": "BaseBdev2", 00:13:58.147 "uuid": "7d81ec72-05bb-4e7f-a512-749432699c9b", 00:13:58.147 "is_configured": true, 00:13:58.147 "data_offset": 2048, 00:13:58.147 "data_size": 63488 00:13:58.147 }, 00:13:58.147 { 00:13:58.147 "name": "BaseBdev3", 00:13:58.147 "uuid": "f21a7642-b49e-4004-a91c-a0b552ea697b", 00:13:58.147 "is_configured": true, 00:13:58.147 "data_offset": 2048, 00:13:58.147 "data_size": 63488 00:13:58.147 }, 00:13:58.147 { 00:13:58.147 "name": "BaseBdev4", 00:13:58.147 "uuid": "4f15d002-6e56-4156-954a-bc4b29f73fdb", 00:13:58.147 "is_configured": true, 00:13:58.147 "data_offset": 2048, 00:13:58.147 "data_size": 63488 00:13:58.147 } 00:13:58.147 ] 00:13:58.147 } 00:13:58.147 } 00:13:58.147 }' 00:13:58.147 23:53:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:58.408 BaseBdev2 00:13:58.408 BaseBdev3 00:13:58.408 BaseBdev4' 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.408 23:53:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:58.408 23:53:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.408 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.668 [2024-11-02 23:53:52.521115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.668 "name": "Existed_Raid", 00:13:58.668 "uuid": "ae1c5ab2-c6e9-4064-9e39-a64cdc1e9fd4", 00:13:58.668 "strip_size_kb": 64, 00:13:58.668 "state": "online", 00:13:58.668 "raid_level": "raid5f", 00:13:58.668 "superblock": true, 00:13:58.668 "num_base_bdevs": 4, 00:13:58.668 "num_base_bdevs_discovered": 3, 00:13:58.668 "num_base_bdevs_operational": 3, 00:13:58.668 "base_bdevs_list": [ 00:13:58.668 { 00:13:58.668 "name": 
null, 00:13:58.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.668 "is_configured": false, 00:13:58.668 "data_offset": 0, 00:13:58.668 "data_size": 63488 00:13:58.668 }, 00:13:58.668 { 00:13:58.668 "name": "BaseBdev2", 00:13:58.668 "uuid": "7d81ec72-05bb-4e7f-a512-749432699c9b", 00:13:58.668 "is_configured": true, 00:13:58.668 "data_offset": 2048, 00:13:58.668 "data_size": 63488 00:13:58.668 }, 00:13:58.668 { 00:13:58.668 "name": "BaseBdev3", 00:13:58.668 "uuid": "f21a7642-b49e-4004-a91c-a0b552ea697b", 00:13:58.668 "is_configured": true, 00:13:58.668 "data_offset": 2048, 00:13:58.668 "data_size": 63488 00:13:58.668 }, 00:13:58.668 { 00:13:58.668 "name": "BaseBdev4", 00:13:58.668 "uuid": "4f15d002-6e56-4156-954a-bc4b29f73fdb", 00:13:58.668 "is_configured": true, 00:13:58.668 "data_offset": 2048, 00:13:58.668 "data_size": 63488 00:13:58.668 } 00:13:58.668 ] 00:13:58.668 }' 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.668 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.941 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:58.941 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:58.941 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.941 23:53:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:58.941 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.941 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.941 23:53:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.941 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:13:58.941 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:58.941 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:58.941 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.941 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.202 [2024-11-02 23:53:53.031656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:59.202 [2024-11-02 23:53:53.031821] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.202 [2024-11-02 23:53:53.043217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.202 [2024-11-02 23:53:53.099158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.202 [2024-11-02 
23:53:53.158264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:59.202 [2024-11-02 23:53:53.158350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.202 23:53:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.202 BaseBdev2 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.202 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.202 [ 00:13:59.202 { 00:13:59.202 "name": "BaseBdev2", 00:13:59.202 "aliases": [ 00:13:59.202 "e7e8e43a-716c-4e01-87ec-4703e9689559" 00:13:59.202 ], 00:13:59.202 "product_name": "Malloc disk", 00:13:59.202 "block_size": 512, 00:13:59.202 
"num_blocks": 65536, 00:13:59.202 "uuid": "e7e8e43a-716c-4e01-87ec-4703e9689559", 00:13:59.202 "assigned_rate_limits": { 00:13:59.202 "rw_ios_per_sec": 0, 00:13:59.202 "rw_mbytes_per_sec": 0, 00:13:59.202 "r_mbytes_per_sec": 0, 00:13:59.202 "w_mbytes_per_sec": 0 00:13:59.202 }, 00:13:59.202 "claimed": false, 00:13:59.202 "zoned": false, 00:13:59.202 "supported_io_types": { 00:13:59.202 "read": true, 00:13:59.202 "write": true, 00:13:59.202 "unmap": true, 00:13:59.202 "flush": true, 00:13:59.202 "reset": true, 00:13:59.202 "nvme_admin": false, 00:13:59.202 "nvme_io": false, 00:13:59.202 "nvme_io_md": false, 00:13:59.202 "write_zeroes": true, 00:13:59.202 "zcopy": true, 00:13:59.202 "get_zone_info": false, 00:13:59.202 "zone_management": false, 00:13:59.202 "zone_append": false, 00:13:59.202 "compare": false, 00:13:59.202 "compare_and_write": false, 00:13:59.202 "abort": true, 00:13:59.202 "seek_hole": false, 00:13:59.202 "seek_data": false, 00:13:59.202 "copy": true, 00:13:59.202 "nvme_iov_md": false 00:13:59.202 }, 00:13:59.202 "memory_domains": [ 00:13:59.202 { 00:13:59.202 "dma_device_id": "system", 00:13:59.202 "dma_device_type": 1 00:13:59.202 }, 00:13:59.202 { 00:13:59.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.202 "dma_device_type": 2 00:13:59.202 } 00:13:59.202 ], 00:13:59.202 "driver_specific": {} 00:13:59.203 } 00:13:59.203 ] 00:13:59.203 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.203 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:59.203 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:59.203 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:59.203 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:59.203 23:53:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.203 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.203 BaseBdev3 00:13:59.203 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.203 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:59.203 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:59.203 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:59.203 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:59.203 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:59.203 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:59.203 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:59.203 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.203 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.463 [ 00:13:59.463 { 00:13:59.463 "name": "BaseBdev3", 00:13:59.463 "aliases": [ 00:13:59.463 
"7c12fd06-58f7-49b9-856e-81e3e5048aa7" 00:13:59.463 ], 00:13:59.463 "product_name": "Malloc disk", 00:13:59.463 "block_size": 512, 00:13:59.463 "num_blocks": 65536, 00:13:59.463 "uuid": "7c12fd06-58f7-49b9-856e-81e3e5048aa7", 00:13:59.463 "assigned_rate_limits": { 00:13:59.463 "rw_ios_per_sec": 0, 00:13:59.463 "rw_mbytes_per_sec": 0, 00:13:59.463 "r_mbytes_per_sec": 0, 00:13:59.463 "w_mbytes_per_sec": 0 00:13:59.463 }, 00:13:59.463 "claimed": false, 00:13:59.463 "zoned": false, 00:13:59.463 "supported_io_types": { 00:13:59.463 "read": true, 00:13:59.463 "write": true, 00:13:59.463 "unmap": true, 00:13:59.463 "flush": true, 00:13:59.463 "reset": true, 00:13:59.463 "nvme_admin": false, 00:13:59.463 "nvme_io": false, 00:13:59.463 "nvme_io_md": false, 00:13:59.463 "write_zeroes": true, 00:13:59.463 "zcopy": true, 00:13:59.463 "get_zone_info": false, 00:13:59.463 "zone_management": false, 00:13:59.463 "zone_append": false, 00:13:59.463 "compare": false, 00:13:59.463 "compare_and_write": false, 00:13:59.463 "abort": true, 00:13:59.463 "seek_hole": false, 00:13:59.463 "seek_data": false, 00:13:59.463 "copy": true, 00:13:59.463 "nvme_iov_md": false 00:13:59.463 }, 00:13:59.463 "memory_domains": [ 00:13:59.463 { 00:13:59.463 "dma_device_id": "system", 00:13:59.463 "dma_device_type": 1 00:13:59.463 }, 00:13:59.463 { 00:13:59.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.463 "dma_device_type": 2 00:13:59.463 } 00:13:59.463 ], 00:13:59.463 "driver_specific": {} 00:13:59.463 } 00:13:59.463 ] 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:59.463 23:53:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.463 BaseBdev4 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.463 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:59.463 [ 00:13:59.463 { 00:13:59.463 "name": "BaseBdev4", 00:13:59.463 "aliases": [ 00:13:59.463 "c1cc2024-0710-4726-9edb-09744da63fad" 00:13:59.463 ], 00:13:59.463 "product_name": "Malloc disk", 00:13:59.463 "block_size": 512, 00:13:59.463 "num_blocks": 65536, 00:13:59.464 "uuid": "c1cc2024-0710-4726-9edb-09744da63fad", 00:13:59.464 "assigned_rate_limits": { 00:13:59.464 "rw_ios_per_sec": 0, 00:13:59.464 "rw_mbytes_per_sec": 0, 00:13:59.464 "r_mbytes_per_sec": 0, 00:13:59.464 "w_mbytes_per_sec": 0 00:13:59.464 }, 00:13:59.464 "claimed": false, 00:13:59.464 "zoned": false, 00:13:59.464 "supported_io_types": { 00:13:59.464 "read": true, 00:13:59.464 "write": true, 00:13:59.464 "unmap": true, 00:13:59.464 "flush": true, 00:13:59.464 "reset": true, 00:13:59.464 "nvme_admin": false, 00:13:59.464 "nvme_io": false, 00:13:59.464 "nvme_io_md": false, 00:13:59.464 "write_zeroes": true, 00:13:59.464 "zcopy": true, 00:13:59.464 "get_zone_info": false, 00:13:59.464 "zone_management": false, 00:13:59.464 "zone_append": false, 00:13:59.464 "compare": false, 00:13:59.464 "compare_and_write": false, 00:13:59.464 "abort": true, 00:13:59.464 "seek_hole": false, 00:13:59.464 "seek_data": false, 00:13:59.464 "copy": true, 00:13:59.464 "nvme_iov_md": false 00:13:59.464 }, 00:13:59.464 "memory_domains": [ 00:13:59.464 { 00:13:59.464 "dma_device_id": "system", 00:13:59.464 "dma_device_type": 1 00:13:59.464 }, 00:13:59.464 { 00:13:59.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.464 "dma_device_type": 2 00:13:59.464 } 00:13:59.464 ], 00:13:59.464 "driver_specific": {} 00:13:59.464 } 00:13:59.464 ] 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:59.464 23:53:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.464 [2024-11-02 23:53:53.387182] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:59.464 [2024-11-02 23:53:53.387267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:59.464 [2024-11-02 23:53:53.387316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.464 [2024-11-02 23:53:53.389220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:59.464 [2024-11-02 23:53:53.389304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.464 "name": "Existed_Raid", 00:13:59.464 "uuid": "718aba95-bd97-47eb-a228-7f740f9144a4", 00:13:59.464 "strip_size_kb": 64, 00:13:59.464 "state": "configuring", 00:13:59.464 "raid_level": "raid5f", 00:13:59.464 "superblock": true, 00:13:59.464 "num_base_bdevs": 4, 00:13:59.464 "num_base_bdevs_discovered": 3, 00:13:59.464 "num_base_bdevs_operational": 4, 00:13:59.464 "base_bdevs_list": [ 00:13:59.464 { 00:13:59.464 "name": "BaseBdev1", 00:13:59.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.464 "is_configured": false, 00:13:59.464 "data_offset": 0, 00:13:59.464 "data_size": 0 00:13:59.464 }, 00:13:59.464 { 00:13:59.464 "name": "BaseBdev2", 00:13:59.464 "uuid": "e7e8e43a-716c-4e01-87ec-4703e9689559", 00:13:59.464 "is_configured": true, 00:13:59.464 "data_offset": 2048, 00:13:59.464 
"data_size": 63488 00:13:59.464 }, 00:13:59.464 { 00:13:59.464 "name": "BaseBdev3", 00:13:59.464 "uuid": "7c12fd06-58f7-49b9-856e-81e3e5048aa7", 00:13:59.464 "is_configured": true, 00:13:59.464 "data_offset": 2048, 00:13:59.464 "data_size": 63488 00:13:59.464 }, 00:13:59.464 { 00:13:59.464 "name": "BaseBdev4", 00:13:59.464 "uuid": "c1cc2024-0710-4726-9edb-09744da63fad", 00:13:59.464 "is_configured": true, 00:13:59.464 "data_offset": 2048, 00:13:59.464 "data_size": 63488 00:13:59.464 } 00:13:59.464 ] 00:13:59.464 }' 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.464 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.035 [2024-11-02 23:53:53.858412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.035 23:53:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.035 "name": "Existed_Raid", 00:14:00.035 "uuid": "718aba95-bd97-47eb-a228-7f740f9144a4", 00:14:00.035 "strip_size_kb": 64, 00:14:00.035 "state": "configuring", 00:14:00.035 "raid_level": "raid5f", 00:14:00.035 "superblock": true, 00:14:00.035 "num_base_bdevs": 4, 00:14:00.035 "num_base_bdevs_discovered": 2, 00:14:00.035 "num_base_bdevs_operational": 4, 00:14:00.035 "base_bdevs_list": [ 00:14:00.035 { 00:14:00.035 "name": "BaseBdev1", 00:14:00.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.035 "is_configured": false, 00:14:00.035 "data_offset": 0, 00:14:00.035 "data_size": 0 00:14:00.035 }, 00:14:00.035 { 00:14:00.035 "name": null, 00:14:00.035 "uuid": "e7e8e43a-716c-4e01-87ec-4703e9689559", 00:14:00.035 
"is_configured": false, 00:14:00.035 "data_offset": 0, 00:14:00.035 "data_size": 63488 00:14:00.035 }, 00:14:00.035 { 00:14:00.035 "name": "BaseBdev3", 00:14:00.035 "uuid": "7c12fd06-58f7-49b9-856e-81e3e5048aa7", 00:14:00.035 "is_configured": true, 00:14:00.035 "data_offset": 2048, 00:14:00.035 "data_size": 63488 00:14:00.035 }, 00:14:00.035 { 00:14:00.035 "name": "BaseBdev4", 00:14:00.035 "uuid": "c1cc2024-0710-4726-9edb-09744da63fad", 00:14:00.035 "is_configured": true, 00:14:00.035 "data_offset": 2048, 00:14:00.035 "data_size": 63488 00:14:00.035 } 00:14:00.035 ] 00:14:00.035 }' 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.035 23:53:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.295 [2024-11-02 23:53:54.328683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:14:00.295 BaseBdev1 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.295 [ 00:14:00.295 { 00:14:00.295 "name": "BaseBdev1", 00:14:00.295 "aliases": [ 00:14:00.295 "d516768a-2d49-4f83-a16d-12556e8cd68a" 00:14:00.295 ], 00:14:00.295 "product_name": "Malloc disk", 00:14:00.295 "block_size": 512, 00:14:00.295 "num_blocks": 65536, 00:14:00.295 "uuid": "d516768a-2d49-4f83-a16d-12556e8cd68a", 
00:14:00.295 "assigned_rate_limits": { 00:14:00.295 "rw_ios_per_sec": 0, 00:14:00.295 "rw_mbytes_per_sec": 0, 00:14:00.295 "r_mbytes_per_sec": 0, 00:14:00.295 "w_mbytes_per_sec": 0 00:14:00.295 }, 00:14:00.295 "claimed": true, 00:14:00.295 "claim_type": "exclusive_write", 00:14:00.295 "zoned": false, 00:14:00.295 "supported_io_types": { 00:14:00.295 "read": true, 00:14:00.295 "write": true, 00:14:00.295 "unmap": true, 00:14:00.295 "flush": true, 00:14:00.295 "reset": true, 00:14:00.295 "nvme_admin": false, 00:14:00.295 "nvme_io": false, 00:14:00.295 "nvme_io_md": false, 00:14:00.295 "write_zeroes": true, 00:14:00.295 "zcopy": true, 00:14:00.295 "get_zone_info": false, 00:14:00.295 "zone_management": false, 00:14:00.295 "zone_append": false, 00:14:00.295 "compare": false, 00:14:00.295 "compare_and_write": false, 00:14:00.295 "abort": true, 00:14:00.295 "seek_hole": false, 00:14:00.295 "seek_data": false, 00:14:00.295 "copy": true, 00:14:00.295 "nvme_iov_md": false 00:14:00.295 }, 00:14:00.295 "memory_domains": [ 00:14:00.295 { 00:14:00.295 "dma_device_id": "system", 00:14:00.295 "dma_device_type": 1 00:14:00.295 }, 00:14:00.295 { 00:14:00.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.295 "dma_device_type": 2 00:14:00.295 } 00:14:00.295 ], 00:14:00.295 "driver_specific": {} 00:14:00.295 } 00:14:00.295 ] 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.295 23:53:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.295 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.296 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.296 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.296 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.296 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.555 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.555 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.555 "name": "Existed_Raid", 00:14:00.555 "uuid": "718aba95-bd97-47eb-a228-7f740f9144a4", 00:14:00.555 "strip_size_kb": 64, 00:14:00.555 "state": "configuring", 00:14:00.555 "raid_level": "raid5f", 00:14:00.555 "superblock": true, 00:14:00.555 "num_base_bdevs": 4, 00:14:00.555 "num_base_bdevs_discovered": 3, 00:14:00.555 "num_base_bdevs_operational": 4, 00:14:00.555 "base_bdevs_list": [ 00:14:00.555 { 00:14:00.555 "name": "BaseBdev1", 00:14:00.555 "uuid": "d516768a-2d49-4f83-a16d-12556e8cd68a", 
00:14:00.555 "is_configured": true, 00:14:00.555 "data_offset": 2048, 00:14:00.555 "data_size": 63488 00:14:00.555 }, 00:14:00.555 { 00:14:00.555 "name": null, 00:14:00.555 "uuid": "e7e8e43a-716c-4e01-87ec-4703e9689559", 00:14:00.555 "is_configured": false, 00:14:00.555 "data_offset": 0, 00:14:00.555 "data_size": 63488 00:14:00.555 }, 00:14:00.555 { 00:14:00.555 "name": "BaseBdev3", 00:14:00.555 "uuid": "7c12fd06-58f7-49b9-856e-81e3e5048aa7", 00:14:00.555 "is_configured": true, 00:14:00.555 "data_offset": 2048, 00:14:00.555 "data_size": 63488 00:14:00.555 }, 00:14:00.555 { 00:14:00.555 "name": "BaseBdev4", 00:14:00.555 "uuid": "c1cc2024-0710-4726-9edb-09744da63fad", 00:14:00.555 "is_configured": true, 00:14:00.555 "data_offset": 2048, 00:14:00.555 "data_size": 63488 00:14:00.555 } 00:14:00.555 ] 00:14:00.555 }' 00:14:00.555 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.555 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.814 [2024-11-02 23:53:54.867840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:00.814 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.078 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.078 "name": "Existed_Raid", 00:14:01.078 "uuid": "718aba95-bd97-47eb-a228-7f740f9144a4", 00:14:01.078 "strip_size_kb": 64, 00:14:01.078 "state": "configuring", 00:14:01.078 "raid_level": "raid5f", 00:14:01.078 "superblock": true, 00:14:01.078 "num_base_bdevs": 4, 00:14:01.078 "num_base_bdevs_discovered": 2, 00:14:01.078 "num_base_bdevs_operational": 4, 00:14:01.078 "base_bdevs_list": [ 00:14:01.078 { 00:14:01.078 "name": "BaseBdev1", 00:14:01.078 "uuid": "d516768a-2d49-4f83-a16d-12556e8cd68a", 00:14:01.078 "is_configured": true, 00:14:01.078 "data_offset": 2048, 00:14:01.078 "data_size": 63488 00:14:01.078 }, 00:14:01.078 { 00:14:01.078 "name": null, 00:14:01.078 "uuid": "e7e8e43a-716c-4e01-87ec-4703e9689559", 00:14:01.078 "is_configured": false, 00:14:01.078 "data_offset": 0, 00:14:01.078 "data_size": 63488 00:14:01.078 }, 00:14:01.078 { 00:14:01.078 "name": null, 00:14:01.078 "uuid": "7c12fd06-58f7-49b9-856e-81e3e5048aa7", 00:14:01.078 "is_configured": false, 00:14:01.078 "data_offset": 0, 00:14:01.078 "data_size": 63488 00:14:01.078 }, 00:14:01.078 { 00:14:01.078 "name": "BaseBdev4", 00:14:01.078 "uuid": "c1cc2024-0710-4726-9edb-09744da63fad", 00:14:01.078 "is_configured": true, 00:14:01.078 "data_offset": 2048, 00:14:01.078 "data_size": 63488 00:14:01.078 } 00:14:01.078 ] 00:14:01.078 }' 00:14:01.078 23:53:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.078 23:53:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.342 [2024-11-02 23:53:55.366974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.342 "name": "Existed_Raid", 00:14:01.342 "uuid": "718aba95-bd97-47eb-a228-7f740f9144a4", 00:14:01.342 "strip_size_kb": 64, 00:14:01.342 "state": "configuring", 00:14:01.342 "raid_level": "raid5f", 00:14:01.342 "superblock": true, 00:14:01.342 "num_base_bdevs": 4, 00:14:01.342 "num_base_bdevs_discovered": 3, 00:14:01.342 "num_base_bdevs_operational": 4, 00:14:01.342 "base_bdevs_list": [ 00:14:01.342 { 00:14:01.342 "name": "BaseBdev1", 00:14:01.342 "uuid": "d516768a-2d49-4f83-a16d-12556e8cd68a", 00:14:01.342 "is_configured": true, 00:14:01.342 "data_offset": 2048, 00:14:01.342 "data_size": 63488 00:14:01.342 }, 00:14:01.342 { 00:14:01.342 "name": null, 00:14:01.342 "uuid": "e7e8e43a-716c-4e01-87ec-4703e9689559", 00:14:01.342 "is_configured": false, 00:14:01.342 "data_offset": 0, 00:14:01.342 "data_size": 63488 00:14:01.342 }, 00:14:01.342 { 00:14:01.342 "name": "BaseBdev3", 00:14:01.342 "uuid": "7c12fd06-58f7-49b9-856e-81e3e5048aa7", 
00:14:01.342 "is_configured": true, 00:14:01.342 "data_offset": 2048, 00:14:01.342 "data_size": 63488 00:14:01.342 }, 00:14:01.342 { 00:14:01.342 "name": "BaseBdev4", 00:14:01.342 "uuid": "c1cc2024-0710-4726-9edb-09744da63fad", 00:14:01.342 "is_configured": true, 00:14:01.342 "data_offset": 2048, 00:14:01.342 "data_size": 63488 00:14:01.342 } 00:14:01.342 ] 00:14:01.342 }' 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.342 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.911 [2024-11-02 23:53:55.838272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.911 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.911 "name": "Existed_Raid", 00:14:01.911 "uuid": "718aba95-bd97-47eb-a228-7f740f9144a4", 00:14:01.911 "strip_size_kb": 64, 00:14:01.911 "state": "configuring", 00:14:01.912 "raid_level": "raid5f", 
00:14:01.912 "superblock": true, 00:14:01.912 "num_base_bdevs": 4, 00:14:01.912 "num_base_bdevs_discovered": 2, 00:14:01.912 "num_base_bdevs_operational": 4, 00:14:01.912 "base_bdevs_list": [ 00:14:01.912 { 00:14:01.912 "name": null, 00:14:01.912 "uuid": "d516768a-2d49-4f83-a16d-12556e8cd68a", 00:14:01.912 "is_configured": false, 00:14:01.912 "data_offset": 0, 00:14:01.912 "data_size": 63488 00:14:01.912 }, 00:14:01.912 { 00:14:01.912 "name": null, 00:14:01.912 "uuid": "e7e8e43a-716c-4e01-87ec-4703e9689559", 00:14:01.912 "is_configured": false, 00:14:01.912 "data_offset": 0, 00:14:01.912 "data_size": 63488 00:14:01.912 }, 00:14:01.912 { 00:14:01.912 "name": "BaseBdev3", 00:14:01.912 "uuid": "7c12fd06-58f7-49b9-856e-81e3e5048aa7", 00:14:01.912 "is_configured": true, 00:14:01.912 "data_offset": 2048, 00:14:01.912 "data_size": 63488 00:14:01.912 }, 00:14:01.912 { 00:14:01.912 "name": "BaseBdev4", 00:14:01.912 "uuid": "c1cc2024-0710-4726-9edb-09744da63fad", 00:14:01.912 "is_configured": true, 00:14:01.912 "data_offset": 2048, 00:14:01.912 "data_size": 63488 00:14:01.912 } 00:14:01.912 ] 00:14:01.912 }' 00:14:01.912 23:53:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.912 23:53:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.531 [2024-11-02 23:53:56.331839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.531 "name": "Existed_Raid", 00:14:02.531 "uuid": "718aba95-bd97-47eb-a228-7f740f9144a4", 00:14:02.531 "strip_size_kb": 64, 00:14:02.531 "state": "configuring", 00:14:02.531 "raid_level": "raid5f", 00:14:02.531 "superblock": true, 00:14:02.531 "num_base_bdevs": 4, 00:14:02.531 "num_base_bdevs_discovered": 3, 00:14:02.531 "num_base_bdevs_operational": 4, 00:14:02.531 "base_bdevs_list": [ 00:14:02.531 { 00:14:02.531 "name": null, 00:14:02.531 "uuid": "d516768a-2d49-4f83-a16d-12556e8cd68a", 00:14:02.531 "is_configured": false, 00:14:02.531 "data_offset": 0, 00:14:02.531 "data_size": 63488 00:14:02.531 }, 00:14:02.531 { 00:14:02.531 "name": "BaseBdev2", 00:14:02.531 "uuid": "e7e8e43a-716c-4e01-87ec-4703e9689559", 00:14:02.531 "is_configured": true, 00:14:02.531 "data_offset": 2048, 00:14:02.531 "data_size": 63488 00:14:02.531 }, 00:14:02.531 { 00:14:02.531 "name": "BaseBdev3", 00:14:02.531 "uuid": "7c12fd06-58f7-49b9-856e-81e3e5048aa7", 00:14:02.531 "is_configured": true, 00:14:02.531 "data_offset": 2048, 00:14:02.531 "data_size": 63488 00:14:02.531 }, 00:14:02.531 { 00:14:02.531 "name": "BaseBdev4", 00:14:02.531 "uuid": "c1cc2024-0710-4726-9edb-09744da63fad", 00:14:02.531 "is_configured": true, 00:14:02.531 "data_offset": 2048, 00:14:02.531 "data_size": 63488 00:14:02.531 } 00:14:02.531 ] 00:14:02.531 }' 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.531 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.790 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:02.790 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.790 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.790 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.790 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.790 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:02.790 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.790 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:02.790 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.790 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.790 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.791 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d516768a-2d49-4f83-a16d-12556e8cd68a 00:14:02.791 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.791 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.791 [2024-11-02 23:53:56.853664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:02.791 [2024-11-02 
23:53:56.853880] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:02.791 [2024-11-02 23:53:56.853895] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:02.791 NewBaseBdev 00:14:02.791 [2024-11-02 23:53:56.854163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:14:02.791 [2024-11-02 23:53:56.854622] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:02.791 [2024-11-02 23:53:56.854637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:14:02.791 [2024-11-02 23:53:56.854732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.791 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.791 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:02.791 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:14:02.791 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:02.791 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:02.791 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:02.791 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:02.791 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:02.791 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.791 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.791 23:53:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.791 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:02.791 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.791 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.791 [ 00:14:02.791 { 00:14:02.791 "name": "NewBaseBdev", 00:14:02.791 "aliases": [ 00:14:02.791 "d516768a-2d49-4f83-a16d-12556e8cd68a" 00:14:02.791 ], 00:14:02.791 "product_name": "Malloc disk", 00:14:02.791 "block_size": 512, 00:14:02.791 "num_blocks": 65536, 00:14:02.791 "uuid": "d516768a-2d49-4f83-a16d-12556e8cd68a", 00:14:02.791 "assigned_rate_limits": { 00:14:02.791 "rw_ios_per_sec": 0, 00:14:02.791 "rw_mbytes_per_sec": 0, 00:14:02.791 "r_mbytes_per_sec": 0, 00:14:02.791 "w_mbytes_per_sec": 0 00:14:02.791 }, 00:14:02.791 "claimed": true, 00:14:03.051 "claim_type": "exclusive_write", 00:14:03.051 "zoned": false, 00:14:03.051 "supported_io_types": { 00:14:03.051 "read": true, 00:14:03.051 "write": true, 00:14:03.051 "unmap": true, 00:14:03.051 "flush": true, 00:14:03.051 "reset": true, 00:14:03.051 "nvme_admin": false, 00:14:03.051 "nvme_io": false, 00:14:03.051 "nvme_io_md": false, 00:14:03.051 "write_zeroes": true, 00:14:03.051 "zcopy": true, 00:14:03.051 "get_zone_info": false, 00:14:03.051 "zone_management": false, 00:14:03.051 "zone_append": false, 00:14:03.051 "compare": false, 00:14:03.051 "compare_and_write": false, 00:14:03.051 "abort": true, 00:14:03.051 "seek_hole": false, 00:14:03.051 "seek_data": false, 00:14:03.051 "copy": true, 00:14:03.051 "nvme_iov_md": false 00:14:03.051 }, 00:14:03.051 "memory_domains": [ 00:14:03.051 { 00:14:03.051 "dma_device_id": "system", 00:14:03.051 "dma_device_type": 1 00:14:03.051 }, 00:14:03.051 { 00:14:03.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:03.051 "dma_device_type": 2 00:14:03.051 } 00:14:03.051 ], 00:14:03.051 "driver_specific": {} 00:14:03.051 } 00:14:03.051 ] 00:14:03.051 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.051 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:03.051 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:03.051 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.051 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.051 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:03.051 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.051 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.051 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.051 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.051 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.051 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.051 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.051 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.051 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.051 23:53:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:03.051 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.051 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.051 "name": "Existed_Raid", 00:14:03.051 "uuid": "718aba95-bd97-47eb-a228-7f740f9144a4", 00:14:03.051 "strip_size_kb": 64, 00:14:03.051 "state": "online", 00:14:03.051 "raid_level": "raid5f", 00:14:03.051 "superblock": true, 00:14:03.051 "num_base_bdevs": 4, 00:14:03.051 "num_base_bdevs_discovered": 4, 00:14:03.051 "num_base_bdevs_operational": 4, 00:14:03.051 "base_bdevs_list": [ 00:14:03.051 { 00:14:03.051 "name": "NewBaseBdev", 00:14:03.051 "uuid": "d516768a-2d49-4f83-a16d-12556e8cd68a", 00:14:03.051 "is_configured": true, 00:14:03.051 "data_offset": 2048, 00:14:03.051 "data_size": 63488 00:14:03.051 }, 00:14:03.051 { 00:14:03.051 "name": "BaseBdev2", 00:14:03.051 "uuid": "e7e8e43a-716c-4e01-87ec-4703e9689559", 00:14:03.051 "is_configured": true, 00:14:03.051 "data_offset": 2048, 00:14:03.051 "data_size": 63488 00:14:03.051 }, 00:14:03.051 { 00:14:03.051 "name": "BaseBdev3", 00:14:03.051 "uuid": "7c12fd06-58f7-49b9-856e-81e3e5048aa7", 00:14:03.051 "is_configured": true, 00:14:03.051 "data_offset": 2048, 00:14:03.051 "data_size": 63488 00:14:03.051 }, 00:14:03.051 { 00:14:03.051 "name": "BaseBdev4", 00:14:03.051 "uuid": "c1cc2024-0710-4726-9edb-09744da63fad", 00:14:03.051 "is_configured": true, 00:14:03.051 "data_offset": 2048, 00:14:03.051 "data_size": 63488 00:14:03.051 } 00:14:03.051 ] 00:14:03.051 }' 00:14:03.051 23:53:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.052 23:53:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.312 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:03.312 23:53:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:03.312 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:03.312 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:03.312 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:03.312 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:03.312 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:03.312 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.312 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.312 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:03.312 [2024-11-02 23:53:57.289183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.312 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.312 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:03.312 "name": "Existed_Raid", 00:14:03.312 "aliases": [ 00:14:03.312 "718aba95-bd97-47eb-a228-7f740f9144a4" 00:14:03.312 ], 00:14:03.312 "product_name": "Raid Volume", 00:14:03.312 "block_size": 512, 00:14:03.312 "num_blocks": 190464, 00:14:03.312 "uuid": "718aba95-bd97-47eb-a228-7f740f9144a4", 00:14:03.312 "assigned_rate_limits": { 00:14:03.312 "rw_ios_per_sec": 0, 00:14:03.312 "rw_mbytes_per_sec": 0, 00:14:03.312 "r_mbytes_per_sec": 0, 00:14:03.312 "w_mbytes_per_sec": 0 00:14:03.312 }, 00:14:03.312 "claimed": false, 00:14:03.312 "zoned": false, 00:14:03.312 "supported_io_types": { 00:14:03.312 "read": true, 00:14:03.312 
"write": true, 00:14:03.312 "unmap": false, 00:14:03.312 "flush": false, 00:14:03.312 "reset": true, 00:14:03.312 "nvme_admin": false, 00:14:03.312 "nvme_io": false, 00:14:03.312 "nvme_io_md": false, 00:14:03.312 "write_zeroes": true, 00:14:03.312 "zcopy": false, 00:14:03.312 "get_zone_info": false, 00:14:03.312 "zone_management": false, 00:14:03.312 "zone_append": false, 00:14:03.312 "compare": false, 00:14:03.312 "compare_and_write": false, 00:14:03.312 "abort": false, 00:14:03.312 "seek_hole": false, 00:14:03.312 "seek_data": false, 00:14:03.312 "copy": false, 00:14:03.312 "nvme_iov_md": false 00:14:03.312 }, 00:14:03.312 "driver_specific": { 00:14:03.312 "raid": { 00:14:03.312 "uuid": "718aba95-bd97-47eb-a228-7f740f9144a4", 00:14:03.312 "strip_size_kb": 64, 00:14:03.312 "state": "online", 00:14:03.312 "raid_level": "raid5f", 00:14:03.312 "superblock": true, 00:14:03.312 "num_base_bdevs": 4, 00:14:03.312 "num_base_bdevs_discovered": 4, 00:14:03.312 "num_base_bdevs_operational": 4, 00:14:03.312 "base_bdevs_list": [ 00:14:03.312 { 00:14:03.312 "name": "NewBaseBdev", 00:14:03.312 "uuid": "d516768a-2d49-4f83-a16d-12556e8cd68a", 00:14:03.312 "is_configured": true, 00:14:03.312 "data_offset": 2048, 00:14:03.312 "data_size": 63488 00:14:03.312 }, 00:14:03.312 { 00:14:03.312 "name": "BaseBdev2", 00:14:03.312 "uuid": "e7e8e43a-716c-4e01-87ec-4703e9689559", 00:14:03.312 "is_configured": true, 00:14:03.312 "data_offset": 2048, 00:14:03.312 "data_size": 63488 00:14:03.312 }, 00:14:03.312 { 00:14:03.312 "name": "BaseBdev3", 00:14:03.312 "uuid": "7c12fd06-58f7-49b9-856e-81e3e5048aa7", 00:14:03.312 "is_configured": true, 00:14:03.312 "data_offset": 2048, 00:14:03.312 "data_size": 63488 00:14:03.312 }, 00:14:03.312 { 00:14:03.312 "name": "BaseBdev4", 00:14:03.312 "uuid": "c1cc2024-0710-4726-9edb-09744da63fad", 00:14:03.312 "is_configured": true, 00:14:03.312 "data_offset": 2048, 00:14:03.312 "data_size": 63488 00:14:03.312 } 00:14:03.312 ] 00:14:03.312 } 00:14:03.312 } 
00:14:03.312 }' 00:14:03.312 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:03.312 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:03.312 BaseBdev2 00:14:03.312 BaseBdev3 00:14:03.312 BaseBdev4' 00:14:03.312 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.572 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:03.572 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.572 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.572 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:03.572 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.572 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.572 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.572 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.572 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.572 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.572 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:03.572 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.572 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.572 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.572 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.572 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.572 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.572 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.573 [2024-11-02 23:53:57.584427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:03.573 [2024-11-02 23:53:57.584506] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.573 [2024-11-02 23:53:57.584579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.573 [2024-11-02 23:53:57.584859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.573 [2024-11-02 23:53:57.584870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 93689 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 93689 ']' 00:14:03.573 23:53:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 93689 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 93689 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:03.573 killing process with pid 93689 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 93689' 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 93689 00:14:03.573 [2024-11-02 23:53:57.636204] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:03.573 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 93689 00:14:03.833 [2024-11-02 23:53:57.676507] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:03.833 23:53:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:03.833 00:14:03.833 real 0m9.405s 00:14:03.833 user 0m16.026s 00:14:03.833 sys 0m2.140s 00:14:03.833 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:03.833 ************************************ 00:14:03.833 END TEST raid5f_state_function_test_sb 00:14:03.833 ************************************ 00:14:03.833 23:53:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.103 23:53:57 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 4 00:14:04.103 23:53:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:04.103 23:53:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:04.103 23:53:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:04.103 ************************************ 00:14:04.103 START TEST raid5f_superblock_test 00:14:04.103 ************************************ 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94337 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94337 00:14:04.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 94337 ']' 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:04.103 23:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.103 [2024-11-02 23:53:58.045556] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:14:04.103 [2024-11-02 23:53:58.045804] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94337 ] 00:14:04.103 [2024-11-02 23:53:58.181144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.367 [2024-11-02 23:53:58.207317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.368 [2024-11-02 23:53:58.248734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.368 [2024-11-02 23:53:58.248870] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.938 malloc1 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.938 [2024-11-02 23:53:58.886069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:04.938 [2024-11-02 23:53:58.886207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.938 [2024-11-02 23:53:58.886242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:04.938 [2024-11-02 23:53:58.886274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.938 [2024-11-02 23:53:58.888344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.938 [2024-11-02 23:53:58.888436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:04.938 pt1 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.938 malloc2 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.938 [2024-11-02 23:53:58.918456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:04.938 [2024-11-02 23:53:58.918516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.938 [2024-11-02 23:53:58.918531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:04.938 [2024-11-02 23:53:58.918541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.938 [2024-11-02 23:53:58.920640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.938 [2024-11-02 23:53:58.920676] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:04.938 pt2 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.938 malloc3 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.938 [2024-11-02 23:53:58.946897] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:04.938 [2024-11-02 23:53:58.947013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.938 [2024-11-02 23:53:58.947053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:04.938 [2024-11-02 23:53:58.947087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.938 [2024-11-02 23:53:58.949149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.938 [2024-11-02 23:53:58.949221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:04.938 pt3 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.938 23:53:58 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.938 malloc4 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.938 [2024-11-02 23:53:58.987027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:04.938 [2024-11-02 23:53:58.987133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.938 [2024-11-02 23:53:58.987165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:04.938 [2024-11-02 23:53:58.987201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.938 [2024-11-02 23:53:58.989332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.938 [2024-11-02 23:53:58.989405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:04.938 pt4 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.938 23:53:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:04.939 23:53:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.939 23:53:58 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:04.939 [2024-11-02 23:53:58.999033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:04.939 [2024-11-02 23:53:59.000883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:04.939 [2024-11-02 23:53:59.001003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:04.939 [2024-11-02 23:53:59.001082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:04.939 [2024-11-02 23:53:59.001282] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:04.939 [2024-11-02 23:53:59.001337] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:04.939 [2024-11-02 23:53:59.001607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:04.939 [2024-11-02 23:53:59.002137] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:04.939 [2024-11-02 23:53:59.002200] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:04.939 [2024-11-02 23:53:59.002352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.939 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.939 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:04.939 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.939 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.939 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.939 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.939 
23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.939 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.939 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.939 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.939 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.939 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.939 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.939 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.939 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.199 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.199 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.199 "name": "raid_bdev1", 00:14:05.199 "uuid": "71fac00d-b812-4cea-b666-52eeec732cdc", 00:14:05.199 "strip_size_kb": 64, 00:14:05.199 "state": "online", 00:14:05.199 "raid_level": "raid5f", 00:14:05.199 "superblock": true, 00:14:05.199 "num_base_bdevs": 4, 00:14:05.199 "num_base_bdevs_discovered": 4, 00:14:05.199 "num_base_bdevs_operational": 4, 00:14:05.199 "base_bdevs_list": [ 00:14:05.199 { 00:14:05.199 "name": "pt1", 00:14:05.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:05.199 "is_configured": true, 00:14:05.199 "data_offset": 2048, 00:14:05.199 "data_size": 63488 00:14:05.199 }, 00:14:05.199 { 00:14:05.199 "name": "pt2", 00:14:05.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.199 "is_configured": true, 00:14:05.199 "data_offset": 2048, 00:14:05.199 
"data_size": 63488 00:14:05.199 }, 00:14:05.199 { 00:14:05.199 "name": "pt3", 00:14:05.199 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.199 "is_configured": true, 00:14:05.199 "data_offset": 2048, 00:14:05.199 "data_size": 63488 00:14:05.199 }, 00:14:05.199 { 00:14:05.199 "name": "pt4", 00:14:05.199 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:05.199 "is_configured": true, 00:14:05.199 "data_offset": 2048, 00:14:05.199 "data_size": 63488 00:14:05.199 } 00:14:05.199 ] 00:14:05.199 }' 00:14:05.199 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.199 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.459 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:05.459 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:05.459 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:05.459 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:05.459 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:05.459 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:05.459 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.459 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:05.459 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.459 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.459 [2024-11-02 23:53:59.483528] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.459 23:53:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.459 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:05.459 "name": "raid_bdev1", 00:14:05.459 "aliases": [ 00:14:05.459 "71fac00d-b812-4cea-b666-52eeec732cdc" 00:14:05.459 ], 00:14:05.459 "product_name": "Raid Volume", 00:14:05.459 "block_size": 512, 00:14:05.459 "num_blocks": 190464, 00:14:05.459 "uuid": "71fac00d-b812-4cea-b666-52eeec732cdc", 00:14:05.459 "assigned_rate_limits": { 00:14:05.459 "rw_ios_per_sec": 0, 00:14:05.459 "rw_mbytes_per_sec": 0, 00:14:05.459 "r_mbytes_per_sec": 0, 00:14:05.459 "w_mbytes_per_sec": 0 00:14:05.459 }, 00:14:05.459 "claimed": false, 00:14:05.459 "zoned": false, 00:14:05.459 "supported_io_types": { 00:14:05.459 "read": true, 00:14:05.459 "write": true, 00:14:05.459 "unmap": false, 00:14:05.459 "flush": false, 00:14:05.459 "reset": true, 00:14:05.459 "nvme_admin": false, 00:14:05.459 "nvme_io": false, 00:14:05.459 "nvme_io_md": false, 00:14:05.459 "write_zeroes": true, 00:14:05.459 "zcopy": false, 00:14:05.459 "get_zone_info": false, 00:14:05.459 "zone_management": false, 00:14:05.459 "zone_append": false, 00:14:05.459 "compare": false, 00:14:05.459 "compare_and_write": false, 00:14:05.459 "abort": false, 00:14:05.459 "seek_hole": false, 00:14:05.459 "seek_data": false, 00:14:05.459 "copy": false, 00:14:05.459 "nvme_iov_md": false 00:14:05.459 }, 00:14:05.459 "driver_specific": { 00:14:05.459 "raid": { 00:14:05.459 "uuid": "71fac00d-b812-4cea-b666-52eeec732cdc", 00:14:05.459 "strip_size_kb": 64, 00:14:05.459 "state": "online", 00:14:05.459 "raid_level": "raid5f", 00:14:05.459 "superblock": true, 00:14:05.459 "num_base_bdevs": 4, 00:14:05.459 "num_base_bdevs_discovered": 4, 00:14:05.459 "num_base_bdevs_operational": 4, 00:14:05.459 "base_bdevs_list": [ 00:14:05.459 { 00:14:05.459 "name": "pt1", 00:14:05.459 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:05.459 "is_configured": true, 00:14:05.459 "data_offset": 2048, 
00:14:05.459 "data_size": 63488
00:14:05.459 },
00:14:05.459 {
00:14:05.459 "name": "pt2",
00:14:05.459 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:05.459 "is_configured": true,
00:14:05.459 "data_offset": 2048,
00:14:05.459 "data_size": 63488
00:14:05.459 },
00:14:05.459 {
00:14:05.459 "name": "pt3",
00:14:05.459 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:05.459 "is_configured": true,
00:14:05.459 "data_offset": 2048,
00:14:05.459 "data_size": 63488
00:14:05.459 },
00:14:05.459 {
00:14:05.459 "name": "pt4",
00:14:05.459 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:05.459 "is_configured": true,
00:14:05.459 "data_offset": 2048,
00:14:05.459 "data_size": 63488
00:14:05.459 }
00:14:05.459 ]
00:14:05.459 }
00:14:05.459 }
00:14:05.459 }'
00:14:05.459 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:14:05.718 pt2
00:14:05.718 pt3
00:14:05.718 pt4'
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.718 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
[2024-11-02 23:53:59.794960] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=71fac00d-b812-4cea-b666-52eeec732cdc
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 71fac00d-b812-4cea-b666-52eeec732cdc ']'
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.978 [2024-11-02 23:53:59.842713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:05.978 [2024-11-02 23:53:59.842762] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:05.978 [2024-11-02 23:53:59.842845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:05.978 [2024-11-02 23:53:59.842931] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:05.978 [2024-11-02 23:53:59.842946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:14:05.978 23:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:05.978 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:14:05.978 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.978 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.978 [2024-11-02 23:54:00.010511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:14:05.978 [2024-11-02 23:54:00.012406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:14:05.978 [2024-11-02 23:54:00.012495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:14:05.978 [2024-11-02 23:54:00.012540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:14:05.978 [2024-11-02 23:54:00.012626] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:14:05.978 [2024-11-02 23:54:00.012706] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:14:05.978 [2024-11-02 23:54:00.012775] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:14:05.978 [2024-11-02 23:54:00.012837] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:14:05.978 [2024-11-02 23:54:00.012853] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:05.978 [2024-11-02 23:54:00.012864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring
00:14:05.978 request:
00:14:05.978 {
00:14:05.978 "name": "raid_bdev1",
00:14:05.978 "raid_level": "raid5f",
00:14:05.978 "base_bdevs": [
00:14:05.978 "malloc1",
00:14:05.978 "malloc2",
00:14:05.978 "malloc3",
00:14:05.978 "malloc4"
00:14:05.978 ],
00:14:05.978 "strip_size_kb": 64,
00:14:05.978 "superblock": false,
00:14:05.978 "method": "bdev_raid_create",
00:14:05.978 "req_id": 1
00:14:05.978 }
00:14:05.978 Got JSON-RPC error response
00:14:05.978 response:
00:14:05.978 {
00:14:05.978 "code": -17,
00:14:05.978 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:14:05.978 }
00:14:05.978 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:14:05.978 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:14:05.978 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:14:05.978 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:14:05.978 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:14:05.978 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:14:05.978 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:05.978 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.978 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.978 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.978 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:14:05.978 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:14:05.978 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:05.978 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.978 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.978 [2024-11-02 23:54:00.066382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:05.978 [2024-11-02 23:54:00.066502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:05.978 [2024-11-02 23:54:00.066541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:14:05.978 [2024-11-02 23:54:00.066571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:05.978 [2024-11-02 23:54:00.068768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:05.978 [2024-11-02 23:54:00.068850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:05.978 [2024-11-02 23:54:00.068944] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:14:05.978 [2024-11-02 23:54:00.069016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:06.238 pt1
00:14:06.238 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.238 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:14:06.238 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:06.238 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:06.238 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:06.238 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:06.238 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:06.238 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:06.238 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:06.238 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:06.238 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:06.238 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:06.238 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:06.238 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.238 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.238 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.238 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:06.238 "name": "raid_bdev1",
00:14:06.238 "uuid": "71fac00d-b812-4cea-b666-52eeec732cdc",
00:14:06.238 "strip_size_kb": 64,
00:14:06.238 "state": "configuring",
00:14:06.238 "raid_level": "raid5f",
00:14:06.238 "superblock": true,
00:14:06.238 "num_base_bdevs": 4,
00:14:06.238 "num_base_bdevs_discovered": 1,
00:14:06.238 "num_base_bdevs_operational": 4,
00:14:06.238 "base_bdevs_list": [
00:14:06.238 {
00:14:06.238 "name": "pt1",
00:14:06.238 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:06.238 "is_configured": true,
00:14:06.238 "data_offset": 2048,
00:14:06.238 "data_size": 63488
00:14:06.238 },
00:14:06.238 {
00:14:06.238 "name": null,
00:14:06.238 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:06.238 "is_configured": false,
00:14:06.238 "data_offset": 2048,
00:14:06.238 "data_size": 63488
00:14:06.238 },
00:14:06.238 {
00:14:06.238 "name": null,
00:14:06.238 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:06.238 "is_configured": false,
00:14:06.238 "data_offset": 2048,
00:14:06.238 "data_size": 63488
00:14:06.238 },
00:14:06.238 {
00:14:06.238 "name": null,
00:14:06.238 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:06.238 "is_configured": false,
00:14:06.238 "data_offset": 2048,
00:14:06.238 "data_size": 63488
00:14:06.238 }
00:14:06.238 ]
00:14:06.238 }'
00:14:06.238 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:06.238 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.497 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:14:06.497 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:06.497 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.497 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.497 [2024-11-02 23:54:00.517631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:06.497 [2024-11-02 23:54:00.517701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:06.497 [2024-11-02 23:54:00.517722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:14:06.497 [2024-11-02 23:54:00.517731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:06.497 [2024-11-02 23:54:00.518156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:06.497 [2024-11-02 23:54:00.518174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:06.498 [2024-11-02 23:54:00.518249] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:14:06.498 [2024-11-02 23:54:00.518271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:06.498 pt2
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.498 [2024-11-02 23:54:00.529623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:06.498 "name": "raid_bdev1",
00:14:06.498 "uuid": "71fac00d-b812-4cea-b666-52eeec732cdc",
00:14:06.498 "strip_size_kb": 64,
00:14:06.498 "state": "configuring",
00:14:06.498 "raid_level": "raid5f",
00:14:06.498 "superblock": true,
00:14:06.498 "num_base_bdevs": 4,
00:14:06.498 "num_base_bdevs_discovered": 1,
00:14:06.498 "num_base_bdevs_operational": 4,
00:14:06.498 "base_bdevs_list": [
00:14:06.498 {
00:14:06.498 "name": "pt1",
00:14:06.498 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:06.498 "is_configured": true,
00:14:06.498 "data_offset": 2048,
00:14:06.498 "data_size": 63488
00:14:06.498 },
00:14:06.498 {
00:14:06.498 "name": null,
00:14:06.498 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:06.498 "is_configured": false,
00:14:06.498 "data_offset": 0,
00:14:06.498 "data_size": 63488
00:14:06.498 },
00:14:06.498 {
00:14:06.498 "name": null,
00:14:06.498 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:06.498 "is_configured": false,
00:14:06.498 "data_offset": 2048,
00:14:06.498 "data_size": 63488
00:14:06.498 },
00:14:06.498 {
00:14:06.498 "name": null,
00:14:06.498 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:06.498 "is_configured": false,
00:14:06.498 "data_offset": 2048,
00:14:06.498 "data_size": 63488
00:14:06.498 }
00:14:06.498 ]
00:14:06.498 }'
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:06.498 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.068 [2024-11-02 23:54:00.952909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:07.068 [2024-11-02 23:54:00.953066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:07.068 [2024-11-02 23:54:00.953107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:14:07.068 [2024-11-02 23:54:00.953137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:07.068 [2024-11-02 23:54:00.953552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:07.068 [2024-11-02 23:54:00.953611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:07.068 [2024-11-02 23:54:00.953710] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:14:07.068 [2024-11-02 23:54:00.953777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:07.068 pt2
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.068 [2024-11-02 23:54:00.964846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:14:07.068 [2024-11-02 23:54:00.964950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:07.068 [2024-11-02 23:54:00.964983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:14:07.068 [2024-11-02 23:54:00.965011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:07.068 [2024-11-02 23:54:00.965346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:07.068 [2024-11-02 23:54:00.965403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:14:07.068 [2024-11-02 23:54:00.965480] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:14:07.068 [2024-11-02 23:54:00.965526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:14:07.068 pt3
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.068 [2024-11-02 23:54:00.976853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:14:07.068 [2024-11-02 23:54:00.976902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:07.068 [2024-11-02 23:54:00.976916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:14:07.068 [2024-11-02 23:54:00.976925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:07.068 [2024-11-02 23:54:00.977190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:07.068 [2024-11-02 23:54:00.977208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:14:07.068 [2024-11-02 23:54:00.977253] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:14:07.068 [2024-11-02 23:54:00.977270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:14:07.068 [2024-11-02 23:54:00.977361] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
00:14:07.068 [2024-11-02 23:54:00.977372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:14:07.068 [2024-11-02 23:54:00.977590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:14:07.068 [2024-11-02 23:54:00.978043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
00:14:07.068 [2024-11-02 23:54:00.978054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900
00:14:07.068 [2024-11-02 23:54:00.978148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:07.068 pt4
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.068 23:54:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.068 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.068 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:07.068 "name": "raid_bdev1",
00:14:07.068 "uuid": "71fac00d-b812-4cea-b666-52eeec732cdc",
00:14:07.068 "strip_size_kb": 64,
00:14:07.068 "state": "online",
00:14:07.068 "raid_level": "raid5f",
00:14:07.068 "superblock": true,
00:14:07.068 "num_base_bdevs": 4,
00:14:07.068 "num_base_bdevs_discovered": 4,
00:14:07.068 "num_base_bdevs_operational": 4,
00:14:07.068 "base_bdevs_list": [
00:14:07.068 {
00:14:07.068 "name": "pt1",
00:14:07.068 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:07.068 "is_configured": true,
00:14:07.068 "data_offset": 2048,
00:14:07.068 "data_size": 63488
00:14:07.068 },
00:14:07.068 {
00:14:07.068 "name": "pt2",
00:14:07.068 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:07.068 "is_configured": true,
00:14:07.068 "data_offset": 2048,
00:14:07.068 "data_size": 63488
00:14:07.068 },
00:14:07.068 {
00:14:07.068 "name": "pt3",
00:14:07.068 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:07.068 "is_configured": true,
00:14:07.068 "data_offset": 2048,
00:14:07.068 "data_size": 63488
00:14:07.068 },
00:14:07.068 {
00:14:07.068 "name": "pt4",
00:14:07.068 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:07.068 "is_configured": true,
00:14:07.068 "data_offset": 2048,
00:14:07.068 "data_size": 63488
00:14:07.068 }
00:14:07.068 ]
00:14:07.068 }'
00:14:07.069 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:07.069 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.637 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:14:07.637 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:14:07.637 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:14:07.637 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:07.637 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:14:07.637 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:14:07.637 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:14:07.637 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:07.637 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.637 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:07.637 [2024-11-02 23:54:01.452232] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:07.637 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.637 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:14:07.637 "name": "raid_bdev1",
00:14:07.637 "aliases": [
00:14:07.637 "71fac00d-b812-4cea-b666-52eeec732cdc"
00:14:07.637 ],
00:14:07.638 "product_name": "Raid Volume",
00:14:07.638 "block_size": 512,
00:14:07.638 "num_blocks": 190464,
00:14:07.638 "uuid": "71fac00d-b812-4cea-b666-52eeec732cdc",
00:14:07.638 "assigned_rate_limits": {
00:14:07.638 "rw_ios_per_sec": 0,
00:14:07.638 "rw_mbytes_per_sec": 0,
00:14:07.638 "r_mbytes_per_sec": 0,
00:14:07.638 "w_mbytes_per_sec": 0
00:14:07.638 },
00:14:07.638 "claimed": false,
00:14:07.638 "zoned": false,
00:14:07.638 "supported_io_types": {
00:14:07.638 "read": true,
00:14:07.638 "write": true,
00:14:07.638 "unmap": false,
00:14:07.638 "flush": false,
00:14:07.638 "reset": true,
00:14:07.638 "nvme_admin": false,
00:14:07.638 "nvme_io": false,
00:14:07.638 "nvme_io_md": false,
00:14:07.638 "write_zeroes": true,
00:14:07.638 "zcopy": false,
00:14:07.638 "get_zone_info": false,
00:14:07.638 "zone_management": false,
00:14:07.638 "zone_append": false,
00:14:07.638 "compare": false,
00:14:07.638 "compare_and_write": false,
00:14:07.638 "abort": false,
00:14:07.638 "seek_hole": false,
00:14:07.638 "seek_data": false,
00:14:07.638 "copy": false,
00:14:07.638 "nvme_iov_md": false
00:14:07.638 },
00:14:07.638 "driver_specific": {
00:14:07.638 "raid": {
00:14:07.638 "uuid": "71fac00d-b812-4cea-b666-52eeec732cdc",
00:14:07.638 "strip_size_kb": 64,
00:14:07.638 "state": "online",
00:14:07.638 "raid_level": "raid5f",
00:14:07.638 "superblock": true,
00:14:07.638 "num_base_bdevs": 4,
00:14:07.638 "num_base_bdevs_discovered": 4,
00:14:07.638 "num_base_bdevs_operational": 4,
00:14:07.638 "base_bdevs_list": [
00:14:07.638 {
00:14:07.638 "name": "pt1",
00:14:07.638 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:07.638 "is_configured": true,
00:14:07.638 "data_offset": 2048,
00:14:07.638 "data_size": 63488
00:14:07.638 },
00:14:07.638 {
00:14:07.638 "name": "pt2",
00:14:07.638 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:07.638 "is_configured": true,
00:14:07.638 "data_offset": 2048,
00:14:07.638 "data_size": 63488
00:14:07.638 },
00:14:07.638 {
00:14:07.638 "name": "pt3",
00:14:07.638 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:07.638 "is_configured": true,
00:14:07.638 "data_offset": 2048,
00:14:07.638 "data_size": 63488
00:14:07.638 },
00:14:07.638 {
00:14:07.638 "name": "pt4",
00:14:07.638 "uuid": "00000000-0000-0000-0000-000000000004",
00:14:07.638 "is_configured": true,
00:14:07.638 "data_offset": 2048,
00:14:07.638 "data_size": 63488
00:14:07.638 }
00:14:07.638 ]
00:14:07.638 }
00:14:07.638 }
00:14:07.638 }'
00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:14:07.638 pt2
00:14:07.638 pt3
00:14:07.638 pt4'
00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] |
join(" ")' 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.638 23:54:01 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.638 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.898 [2024-11-02 23:54:01.767658] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.898 
23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 71fac00d-b812-4cea-b666-52eeec732cdc '!=' 71fac00d-b812-4cea-b666-52eeec732cdc ']' 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.898 [2024-11-02 23:54:01.815435] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.898 "name": "raid_bdev1", 00:14:07.898 "uuid": "71fac00d-b812-4cea-b666-52eeec732cdc", 00:14:07.898 "strip_size_kb": 64, 00:14:07.898 "state": "online", 00:14:07.898 "raid_level": "raid5f", 00:14:07.898 "superblock": true, 00:14:07.898 "num_base_bdevs": 4, 00:14:07.898 "num_base_bdevs_discovered": 3, 00:14:07.898 "num_base_bdevs_operational": 3, 00:14:07.898 "base_bdevs_list": [ 00:14:07.898 { 00:14:07.898 "name": null, 00:14:07.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.898 "is_configured": false, 00:14:07.898 "data_offset": 0, 00:14:07.898 "data_size": 63488 00:14:07.898 }, 00:14:07.898 { 00:14:07.898 "name": "pt2", 00:14:07.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:07.898 "is_configured": true, 00:14:07.898 "data_offset": 2048, 00:14:07.898 "data_size": 63488 00:14:07.898 }, 00:14:07.898 { 00:14:07.898 "name": "pt3", 00:14:07.898 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:07.898 "is_configured": true, 00:14:07.898 "data_offset": 2048, 00:14:07.898 "data_size": 63488 00:14:07.898 }, 00:14:07.898 { 00:14:07.898 "name": "pt4", 00:14:07.898 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:07.898 "is_configured": true, 00:14:07.898 
"data_offset": 2048, 00:14:07.898 "data_size": 63488 00:14:07.898 } 00:14:07.898 ] 00:14:07.898 }' 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.898 23:54:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.468 [2024-11-02 23:54:02.318557] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:08.468 [2024-11-02 23:54:02.318653] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.468 [2024-11-02 23:54:02.318784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.468 [2024-11-02 23:54:02.318897] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.468 [2024-11-02 23:54:02.318959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.468 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.468 [2024-11-02 23:54:02.414356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:08.468 [2024-11-02 23:54:02.414456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.468 [2024-11-02 23:54:02.414511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:08.469 [2024-11-02 23:54:02.414540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.469 [2024-11-02 23:54:02.416721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.469 [2024-11-02 23:54:02.416832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:08.469 [2024-11-02 23:54:02.416922] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:08.469 [2024-11-02 23:54:02.416991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:08.469 pt2 00:14:08.469 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.469 23:54:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:08.469 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.469 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.469 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.469 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.469 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.469 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.469 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.469 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.469 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.469 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.469 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.469 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.469 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.469 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.469 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.469 "name": "raid_bdev1", 00:14:08.469 "uuid": "71fac00d-b812-4cea-b666-52eeec732cdc", 00:14:08.469 "strip_size_kb": 64, 00:14:08.469 "state": "configuring", 00:14:08.469 "raid_level": "raid5f", 00:14:08.469 "superblock": true, 00:14:08.469 
"num_base_bdevs": 4, 00:14:08.469 "num_base_bdevs_discovered": 1, 00:14:08.469 "num_base_bdevs_operational": 3, 00:14:08.469 "base_bdevs_list": [ 00:14:08.469 { 00:14:08.469 "name": null, 00:14:08.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.469 "is_configured": false, 00:14:08.469 "data_offset": 2048, 00:14:08.469 "data_size": 63488 00:14:08.469 }, 00:14:08.469 { 00:14:08.469 "name": "pt2", 00:14:08.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:08.469 "is_configured": true, 00:14:08.469 "data_offset": 2048, 00:14:08.469 "data_size": 63488 00:14:08.469 }, 00:14:08.469 { 00:14:08.469 "name": null, 00:14:08.469 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:08.469 "is_configured": false, 00:14:08.469 "data_offset": 2048, 00:14:08.469 "data_size": 63488 00:14:08.469 }, 00:14:08.469 { 00:14:08.469 "name": null, 00:14:08.469 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:08.469 "is_configured": false, 00:14:08.469 "data_offset": 2048, 00:14:08.469 "data_size": 63488 00:14:08.469 } 00:14:08.469 ] 00:14:08.469 }' 00:14:08.469 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.469 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.729 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:08.729 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:08.729 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:08.729 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.729 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.729 [2024-11-02 23:54:02.817738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:08.729 [2024-11-02 
23:54:02.817825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.729 [2024-11-02 23:54:02.817848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:08.729 [2024-11-02 23:54:02.817862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.729 [2024-11-02 23:54:02.818254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.729 [2024-11-02 23:54:02.818275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:08.729 [2024-11-02 23:54:02.818348] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:08.729 [2024-11-02 23:54:02.818373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:08.988 pt3 00:14:08.988 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.988 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:08.988 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.988 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.988 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.988 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.988 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.988 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.988 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.988 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:08.988 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.988 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.988 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.988 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.988 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.988 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.988 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.988 "name": "raid_bdev1", 00:14:08.988 "uuid": "71fac00d-b812-4cea-b666-52eeec732cdc", 00:14:08.988 "strip_size_kb": 64, 00:14:08.989 "state": "configuring", 00:14:08.989 "raid_level": "raid5f", 00:14:08.989 "superblock": true, 00:14:08.989 "num_base_bdevs": 4, 00:14:08.989 "num_base_bdevs_discovered": 2, 00:14:08.989 "num_base_bdevs_operational": 3, 00:14:08.989 "base_bdevs_list": [ 00:14:08.989 { 00:14:08.989 "name": null, 00:14:08.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.989 "is_configured": false, 00:14:08.989 "data_offset": 2048, 00:14:08.989 "data_size": 63488 00:14:08.989 }, 00:14:08.989 { 00:14:08.989 "name": "pt2", 00:14:08.989 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:08.989 "is_configured": true, 00:14:08.989 "data_offset": 2048, 00:14:08.989 "data_size": 63488 00:14:08.989 }, 00:14:08.989 { 00:14:08.989 "name": "pt3", 00:14:08.989 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:08.989 "is_configured": true, 00:14:08.989 "data_offset": 2048, 00:14:08.989 "data_size": 63488 00:14:08.989 }, 00:14:08.989 { 00:14:08.989 "name": null, 00:14:08.989 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:08.989 "is_configured": false, 00:14:08.989 "data_offset": 2048, 
00:14:08.989 "data_size": 63488 00:14:08.989 } 00:14:08.989 ] 00:14:08.989 }' 00:14:08.989 23:54:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.989 23:54:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.250 [2024-11-02 23:54:03.264937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:09.250 [2024-11-02 23:54:03.265076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.250 [2024-11-02 23:54:03.265116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:09.250 [2024-11-02 23:54:03.265146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.250 [2024-11-02 23:54:03.265571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.250 [2024-11-02 23:54:03.265632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:09.250 [2024-11-02 23:54:03.265747] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:09.250 [2024-11-02 23:54:03.265802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:09.250 [2024-11-02 23:54:03.265929] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:09.250 [2024-11-02 23:54:03.265969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:09.250 [2024-11-02 23:54:03.266223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:14:09.250 [2024-11-02 23:54:03.266799] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:09.250 [2024-11-02 23:54:03.266848] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:14:09.250 [2024-11-02 23:54:03.267115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.250 pt4 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.250 
23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.250 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.250 "name": "raid_bdev1", 00:14:09.250 "uuid": "71fac00d-b812-4cea-b666-52eeec732cdc", 00:14:09.250 "strip_size_kb": 64, 00:14:09.250 "state": "online", 00:14:09.250 "raid_level": "raid5f", 00:14:09.250 "superblock": true, 00:14:09.250 "num_base_bdevs": 4, 00:14:09.250 "num_base_bdevs_discovered": 3, 00:14:09.250 "num_base_bdevs_operational": 3, 00:14:09.250 "base_bdevs_list": [ 00:14:09.250 { 00:14:09.250 "name": null, 00:14:09.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.251 "is_configured": false, 00:14:09.251 "data_offset": 2048, 00:14:09.251 "data_size": 63488 00:14:09.251 }, 00:14:09.251 { 00:14:09.251 "name": "pt2", 00:14:09.251 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:09.251 "is_configured": true, 00:14:09.251 "data_offset": 2048, 00:14:09.251 "data_size": 63488 00:14:09.251 }, 00:14:09.251 { 00:14:09.251 "name": "pt3", 00:14:09.251 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:09.251 "is_configured": true, 00:14:09.251 "data_offset": 2048, 00:14:09.251 "data_size": 63488 00:14:09.251 }, 00:14:09.251 { 00:14:09.251 "name": "pt4", 00:14:09.251 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:09.251 "is_configured": true, 00:14:09.251 "data_offset": 2048, 00:14:09.251 "data_size": 63488 00:14:09.251 } 00:14:09.251 ] 00:14:09.251 }' 00:14:09.251 23:54:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.251 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.820 [2024-11-02 23:54:03.724161] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:09.820 [2024-11-02 23:54:03.724194] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:09.820 [2024-11-02 23:54:03.724270] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.820 [2024-11-02 23:54:03.724348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:09.820 [2024-11-02 23:54:03.724359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.820 [2024-11-02 23:54:03.800009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:09.820 [2024-11-02 23:54:03.800111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.820 [2024-11-02 23:54:03.800146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:09.820 [2024-11-02 23:54:03.800172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.820 [2024-11-02 23:54:03.802386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.820 [2024-11-02 23:54:03.802459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:09.820 [2024-11-02 23:54:03.802578] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:09.820 [2024-11-02 23:54:03.802633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:09.820 
[2024-11-02 23:54:03.802772] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:09.820 [2024-11-02 23:54:03.802841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:09.820 [2024-11-02 23:54:03.802880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:14:09.820 [2024-11-02 23:54:03.802969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:09.820 [2024-11-02 23:54:03.803116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:09.820 pt1 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.820 "name": "raid_bdev1", 00:14:09.820 "uuid": "71fac00d-b812-4cea-b666-52eeec732cdc", 00:14:09.820 "strip_size_kb": 64, 00:14:09.820 "state": "configuring", 00:14:09.820 "raid_level": "raid5f", 00:14:09.820 "superblock": true, 00:14:09.820 "num_base_bdevs": 4, 00:14:09.820 "num_base_bdevs_discovered": 2, 00:14:09.820 "num_base_bdevs_operational": 3, 00:14:09.820 "base_bdevs_list": [ 00:14:09.820 { 00:14:09.820 "name": null, 00:14:09.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.820 "is_configured": false, 00:14:09.820 "data_offset": 2048, 00:14:09.820 "data_size": 63488 00:14:09.820 }, 00:14:09.820 { 00:14:09.820 "name": "pt2", 00:14:09.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:09.820 "is_configured": true, 00:14:09.820 "data_offset": 2048, 00:14:09.820 "data_size": 63488 00:14:09.820 }, 00:14:09.820 { 00:14:09.820 "name": "pt3", 00:14:09.820 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:09.820 "is_configured": true, 00:14:09.820 "data_offset": 2048, 00:14:09.820 "data_size": 63488 00:14:09.820 }, 00:14:09.820 { 00:14:09.820 "name": null, 00:14:09.820 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:09.820 "is_configured": false, 00:14:09.820 "data_offset": 2048, 00:14:09.820 "data_size": 63488 00:14:09.820 } 00:14:09.820 ] 
00:14:09.820 }' 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.820 23:54:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.390 [2024-11-02 23:54:04.319122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:10.390 [2024-11-02 23:54:04.319237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.390 [2024-11-02 23:54:04.319274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:10.390 [2024-11-02 23:54:04.319335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.390 [2024-11-02 23:54:04.319763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.390 [2024-11-02 23:54:04.319823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:14:10.390 [2024-11-02 23:54:04.319924] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:10.390 [2024-11-02 23:54:04.319985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:10.390 [2024-11-02 23:54:04.320125] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:14:10.390 [2024-11-02 23:54:04.320166] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:10.390 [2024-11-02 23:54:04.320405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:14:10.390 [2024-11-02 23:54:04.320989] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:14:10.390 [2024-11-02 23:54:04.321045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:14:10.390 [2024-11-02 23:54:04.321266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.390 pt4 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.390 23:54:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.390 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.390 "name": "raid_bdev1", 00:14:10.390 "uuid": "71fac00d-b812-4cea-b666-52eeec732cdc", 00:14:10.390 "strip_size_kb": 64, 00:14:10.390 "state": "online", 00:14:10.390 "raid_level": "raid5f", 00:14:10.390 "superblock": true, 00:14:10.390 "num_base_bdevs": 4, 00:14:10.390 "num_base_bdevs_discovered": 3, 00:14:10.390 "num_base_bdevs_operational": 3, 00:14:10.390 "base_bdevs_list": [ 00:14:10.390 { 00:14:10.390 "name": null, 00:14:10.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.390 "is_configured": false, 00:14:10.390 "data_offset": 2048, 00:14:10.390 "data_size": 63488 00:14:10.390 }, 00:14:10.390 { 00:14:10.390 "name": "pt2", 00:14:10.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:10.390 "is_configured": true, 00:14:10.390 "data_offset": 2048, 00:14:10.390 "data_size": 63488 00:14:10.390 }, 00:14:10.390 { 00:14:10.390 "name": "pt3", 00:14:10.390 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:10.390 "is_configured": true, 00:14:10.390 "data_offset": 2048, 00:14:10.390 "data_size": 63488 
00:14:10.390 }, 00:14:10.390 { 00:14:10.390 "name": "pt4", 00:14:10.390 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:10.390 "is_configured": true, 00:14:10.390 "data_offset": 2048, 00:14:10.390 "data_size": 63488 00:14:10.390 } 00:14:10.390 ] 00:14:10.390 }' 00:14:10.391 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.391 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.650 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:10.650 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.650 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.910 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:10.910 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.910 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:10.911 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:10.911 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:10.911 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.911 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.911 [2024-11-02 23:54:04.786651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.911 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.911 23:54:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 71fac00d-b812-4cea-b666-52eeec732cdc '!=' 71fac00d-b812-4cea-b666-52eeec732cdc ']' 00:14:10.911 23:54:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94337 00:14:10.911 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 94337 ']' 00:14:10.911 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 94337 00:14:10.911 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:14:10.911 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:10.911 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94337 00:14:10.911 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:10.911 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:10.911 killing process with pid 94337 00:14:10.911 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94337' 00:14:10.911 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 94337 00:14:10.911 [2024-11-02 23:54:04.855436] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:10.911 [2024-11-02 23:54:04.855532] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.911 [2024-11-02 23:54:04.855612] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.911 [2024-11-02 23:54:04.855622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:14:10.911 23:54:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 94337 00:14:10.911 [2024-11-02 23:54:04.899184] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:11.170 23:54:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:11.170 
00:14:11.170 real 0m7.159s 00:14:11.170 user 0m12.080s 00:14:11.170 sys 0m1.552s 00:14:11.170 23:54:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:11.170 23:54:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.170 ************************************ 00:14:11.170 END TEST raid5f_superblock_test 00:14:11.170 ************************************ 00:14:11.170 23:54:05 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:11.170 23:54:05 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:14:11.170 23:54:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:11.170 23:54:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:11.170 23:54:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:11.170 ************************************ 00:14:11.170 START TEST raid5f_rebuild_test 00:14:11.170 ************************************ 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:11.170 23:54:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:11.170 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=94806 00:14:11.171 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:11.171 23:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 94806 00:14:11.171 23:54:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 94806 ']' 00:14:11.171 23:54:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.171 23:54:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:11.171 23:54:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.171 23:54:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:11.171 23:54:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.430 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:11.430 Zero copy mechanism will not be used. 00:14:11.430 [2024-11-02 23:54:05.288147] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:14:11.430 [2024-11-02 23:54:05.288322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94806 ] 00:14:11.430 [2024-11-02 23:54:05.441599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.430 [2024-11-02 23:54:05.467779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.430 [2024-11-02 23:54:05.510612] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.430 [2024-11-02 23:54:05.510722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.370 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:12.370 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:14:12.370 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.370 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:12.370 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.370 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.370 BaseBdev1_malloc 00:14:12.370 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.370 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:12.370 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.370 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.370 [2024-11-02 23:54:06.120989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:12.370 [2024-11-02 23:54:06.121057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.370 [2024-11-02 23:54:06.121084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:12.370 [2024-11-02 23:54:06.121105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.370 [2024-11-02 23:54:06.123196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.370 [2024-11-02 23:54:06.123231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:12.370 BaseBdev1 00:14:12.370 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.370 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.370 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:12.370 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.370 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.370 BaseBdev2_malloc 00:14:12.370 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.370 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:12.370 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.371 [2024-11-02 23:54:06.145582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:12.371 [2024-11-02 23:54:06.145633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.371 [2024-11-02 23:54:06.145651] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:12.371 [2024-11-02 23:54:06.145659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.371 [2024-11-02 23:54:06.147700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.371 [2024-11-02 23:54:06.147754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:12.371 BaseBdev2 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.371 BaseBdev3_malloc 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.371 [2024-11-02 23:54:06.174095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:12.371 [2024-11-02 23:54:06.174148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.371 [2024-11-02 23:54:06.174170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:12.371 [2024-11-02 23:54:06.174179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.371 
[2024-11-02 23:54:06.176179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.371 [2024-11-02 23:54:06.176290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:12.371 BaseBdev3 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.371 BaseBdev4_malloc 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.371 [2024-11-02 23:54:06.211814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:12.371 [2024-11-02 23:54:06.211940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.371 [2024-11-02 23:54:06.211971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:12.371 [2024-11-02 23:54:06.211981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.371 [2024-11-02 23:54:06.214183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.371 [2024-11-02 23:54:06.214216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:14:12.371 BaseBdev4 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.371 spare_malloc 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.371 spare_delay 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.371 [2024-11-02 23:54:06.252310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:12.371 [2024-11-02 23:54:06.252358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.371 [2024-11-02 23:54:06.252377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:12.371 [2024-11-02 23:54:06.252385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.371 [2024-11-02 23:54:06.254388] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.371 [2024-11-02 23:54:06.254502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:12.371 spare 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.371 [2024-11-02 23:54:06.264358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.371 [2024-11-02 23:54:06.266154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.371 [2024-11-02 23:54:06.266250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.371 [2024-11-02 23:54:06.266322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:12.371 [2024-11-02 23:54:06.266443] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:12.371 [2024-11-02 23:54:06.266493] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:12.371 [2024-11-02 23:54:06.266788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:12.371 [2024-11-02 23:54:06.267258] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:12.371 [2024-11-02 23:54:06.267305] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:12.371 [2024-11-02 23:54:06.267457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.371 23:54:06 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.371 "name": "raid_bdev1", 00:14:12.371 "uuid": "87890999-9c20-4859-8803-0e53ebf3fc27", 00:14:12.371 "strip_size_kb": 64, 00:14:12.371 "state": "online", 00:14:12.371 
"raid_level": "raid5f", 00:14:12.371 "superblock": false, 00:14:12.371 "num_base_bdevs": 4, 00:14:12.371 "num_base_bdevs_discovered": 4, 00:14:12.371 "num_base_bdevs_operational": 4, 00:14:12.371 "base_bdevs_list": [ 00:14:12.371 { 00:14:12.371 "name": "BaseBdev1", 00:14:12.371 "uuid": "5b67c43a-0a2f-5c54-892a-276e25e09df5", 00:14:12.371 "is_configured": true, 00:14:12.371 "data_offset": 0, 00:14:12.371 "data_size": 65536 00:14:12.371 }, 00:14:12.371 { 00:14:12.371 "name": "BaseBdev2", 00:14:12.371 "uuid": "88f991f0-da98-5647-9995-bd9297bf5683", 00:14:12.371 "is_configured": true, 00:14:12.371 "data_offset": 0, 00:14:12.371 "data_size": 65536 00:14:12.371 }, 00:14:12.371 { 00:14:12.371 "name": "BaseBdev3", 00:14:12.371 "uuid": "dbf1cac9-10f4-5cd7-8f2c-c7d952bb8ea2", 00:14:12.371 "is_configured": true, 00:14:12.371 "data_offset": 0, 00:14:12.371 "data_size": 65536 00:14:12.371 }, 00:14:12.371 { 00:14:12.371 "name": "BaseBdev4", 00:14:12.371 "uuid": "5b0a342d-6e14-5b01-b985-42803fdfad00", 00:14:12.371 "is_configured": true, 00:14:12.371 "data_offset": 0, 00:14:12.371 "data_size": 65536 00:14:12.371 } 00:14:12.371 ] 00:14:12.371 }' 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.371 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:12.941 [2024-11-02 23:54:06.764561] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:14:12.941 23:54:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:13.201 [2024-11-02 23:54:07.035968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:14:13.201 /dev/nbd0 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.201 1+0 records in 00:14:13.201 1+0 records out 00:14:13.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179365 s, 22.8 MB/s 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:13.201 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:14:13.771 512+0 records in 00:14:13.771 512+0 records out 00:14:13.771 100663296 bytes (101 MB, 96 MiB) copied, 0.521014 s, 193 MB/s 00:14:13.771 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:13.771 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.771 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:13.771 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:13.771 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:13.771 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.771 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:13.771 [2024-11-02 23:54:07.835281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:14:13.771 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:13.771 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:13.771 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:13.771 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.771 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.771 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:13.771 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:13.771 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.771 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:13.771 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.772 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.032 [2024-11-02 23:54:07.867294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:14.032 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.032 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:14.032 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.032 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.032 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.032 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.032 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:14.032 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.032 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.032 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.032 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.032 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.032 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.032 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.032 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.032 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.032 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.032 "name": "raid_bdev1", 00:14:14.032 "uuid": "87890999-9c20-4859-8803-0e53ebf3fc27", 00:14:14.032 "strip_size_kb": 64, 00:14:14.032 "state": "online", 00:14:14.032 "raid_level": "raid5f", 00:14:14.032 "superblock": false, 00:14:14.032 "num_base_bdevs": 4, 00:14:14.032 "num_base_bdevs_discovered": 3, 00:14:14.032 "num_base_bdevs_operational": 3, 00:14:14.032 "base_bdevs_list": [ 00:14:14.032 { 00:14:14.032 "name": null, 00:14:14.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.032 "is_configured": false, 00:14:14.032 "data_offset": 0, 00:14:14.032 "data_size": 65536 00:14:14.032 }, 00:14:14.032 { 00:14:14.032 "name": "BaseBdev2", 00:14:14.032 "uuid": "88f991f0-da98-5647-9995-bd9297bf5683", 00:14:14.032 "is_configured": true, 00:14:14.032 "data_offset": 0, 00:14:14.032 "data_size": 65536 00:14:14.032 }, 00:14:14.032 { 00:14:14.032 "name": "BaseBdev3", 00:14:14.032 "uuid": 
"dbf1cac9-10f4-5cd7-8f2c-c7d952bb8ea2", 00:14:14.032 "is_configured": true, 00:14:14.032 "data_offset": 0, 00:14:14.032 "data_size": 65536 00:14:14.032 }, 00:14:14.032 { 00:14:14.032 "name": "BaseBdev4", 00:14:14.032 "uuid": "5b0a342d-6e14-5b01-b985-42803fdfad00", 00:14:14.032 "is_configured": true, 00:14:14.032 "data_offset": 0, 00:14:14.032 "data_size": 65536 00:14:14.032 } 00:14:14.032 ] 00:14:14.032 }' 00:14:14.032 23:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.032 23:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.292 23:54:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:14.292 23:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.292 23:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.292 [2024-11-02 23:54:08.286619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:14.292 [2024-11-02 23:54:08.290858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:14:14.292 23:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.292 23:54:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:14.292 [2024-11-02 23:54:08.293071] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:15.284 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.284 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.284 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.284 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.284 23:54:09 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.284 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.284 23:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.284 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.284 23:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.284 23:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.544 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.544 "name": "raid_bdev1", 00:14:15.544 "uuid": "87890999-9c20-4859-8803-0e53ebf3fc27", 00:14:15.544 "strip_size_kb": 64, 00:14:15.544 "state": "online", 00:14:15.544 "raid_level": "raid5f", 00:14:15.544 "superblock": false, 00:14:15.544 "num_base_bdevs": 4, 00:14:15.544 "num_base_bdevs_discovered": 4, 00:14:15.544 "num_base_bdevs_operational": 4, 00:14:15.544 "process": { 00:14:15.544 "type": "rebuild", 00:14:15.544 "target": "spare", 00:14:15.544 "progress": { 00:14:15.544 "blocks": 19200, 00:14:15.544 "percent": 9 00:14:15.544 } 00:14:15.544 }, 00:14:15.544 "base_bdevs_list": [ 00:14:15.544 { 00:14:15.544 "name": "spare", 00:14:15.544 "uuid": "3f2e2f93-f9f1-5222-9cc8-81f27cf06f7d", 00:14:15.544 "is_configured": true, 00:14:15.544 "data_offset": 0, 00:14:15.544 "data_size": 65536 00:14:15.544 }, 00:14:15.544 { 00:14:15.544 "name": "BaseBdev2", 00:14:15.544 "uuid": "88f991f0-da98-5647-9995-bd9297bf5683", 00:14:15.544 "is_configured": true, 00:14:15.544 "data_offset": 0, 00:14:15.544 "data_size": 65536 00:14:15.544 }, 00:14:15.544 { 00:14:15.544 "name": "BaseBdev3", 00:14:15.544 "uuid": "dbf1cac9-10f4-5cd7-8f2c-c7d952bb8ea2", 00:14:15.544 "is_configured": true, 00:14:15.544 "data_offset": 0, 00:14:15.544 "data_size": 65536 00:14:15.544 }, 
00:14:15.544 { 00:14:15.544 "name": "BaseBdev4", 00:14:15.544 "uuid": "5b0a342d-6e14-5b01-b985-42803fdfad00", 00:14:15.544 "is_configured": true, 00:14:15.544 "data_offset": 0, 00:14:15.544 "data_size": 65536 00:14:15.544 } 00:14:15.544 ] 00:14:15.544 }' 00:14:15.544 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.545 [2024-11-02 23:54:09.453506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:15.545 [2024-11-02 23:54:09.498512] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:15.545 [2024-11-02 23:54:09.498566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.545 [2024-11-02 23:54:09.498583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:15.545 [2024-11-02 23:54:09.498597] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.545 "name": "raid_bdev1", 00:14:15.545 "uuid": "87890999-9c20-4859-8803-0e53ebf3fc27", 00:14:15.545 "strip_size_kb": 64, 00:14:15.545 "state": "online", 00:14:15.545 "raid_level": "raid5f", 00:14:15.545 "superblock": false, 00:14:15.545 "num_base_bdevs": 4, 00:14:15.545 "num_base_bdevs_discovered": 3, 00:14:15.545 "num_base_bdevs_operational": 3, 00:14:15.545 "base_bdevs_list": [ 00:14:15.545 { 00:14:15.545 "name": null, 00:14:15.545 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:15.545 "is_configured": false, 00:14:15.545 "data_offset": 0, 00:14:15.545 "data_size": 65536 00:14:15.545 }, 00:14:15.545 { 00:14:15.545 "name": "BaseBdev2", 00:14:15.545 "uuid": "88f991f0-da98-5647-9995-bd9297bf5683", 00:14:15.545 "is_configured": true, 00:14:15.545 "data_offset": 0, 00:14:15.545 "data_size": 65536 00:14:15.545 }, 00:14:15.545 { 00:14:15.545 "name": "BaseBdev3", 00:14:15.545 "uuid": "dbf1cac9-10f4-5cd7-8f2c-c7d952bb8ea2", 00:14:15.545 "is_configured": true, 00:14:15.545 "data_offset": 0, 00:14:15.545 "data_size": 65536 00:14:15.545 }, 00:14:15.545 { 00:14:15.545 "name": "BaseBdev4", 00:14:15.545 "uuid": "5b0a342d-6e14-5b01-b985-42803fdfad00", 00:14:15.545 "is_configured": true, 00:14:15.545 "data_offset": 0, 00:14:15.545 "data_size": 65536 00:14:15.545 } 00:14:15.545 ] 00:14:15.545 }' 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.545 23:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.193 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:16.193 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.193 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:16.193 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:16.193 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.193 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.193 23:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.193 23:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.193 23:54:09 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.193 23:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.193 23:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.193 "name": "raid_bdev1", 00:14:16.193 "uuid": "87890999-9c20-4859-8803-0e53ebf3fc27", 00:14:16.194 "strip_size_kb": 64, 00:14:16.194 "state": "online", 00:14:16.194 "raid_level": "raid5f", 00:14:16.194 "superblock": false, 00:14:16.194 "num_base_bdevs": 4, 00:14:16.194 "num_base_bdevs_discovered": 3, 00:14:16.194 "num_base_bdevs_operational": 3, 00:14:16.194 "base_bdevs_list": [ 00:14:16.194 { 00:14:16.194 "name": null, 00:14:16.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.194 "is_configured": false, 00:14:16.194 "data_offset": 0, 00:14:16.194 "data_size": 65536 00:14:16.194 }, 00:14:16.194 { 00:14:16.194 "name": "BaseBdev2", 00:14:16.194 "uuid": "88f991f0-da98-5647-9995-bd9297bf5683", 00:14:16.194 "is_configured": true, 00:14:16.194 "data_offset": 0, 00:14:16.194 "data_size": 65536 00:14:16.194 }, 00:14:16.194 { 00:14:16.194 "name": "BaseBdev3", 00:14:16.194 "uuid": "dbf1cac9-10f4-5cd7-8f2c-c7d952bb8ea2", 00:14:16.194 "is_configured": true, 00:14:16.194 "data_offset": 0, 00:14:16.194 "data_size": 65536 00:14:16.194 }, 00:14:16.194 { 00:14:16.194 "name": "BaseBdev4", 00:14:16.194 "uuid": "5b0a342d-6e14-5b01-b985-42803fdfad00", 00:14:16.194 "is_configured": true, 00:14:16.194 "data_offset": 0, 00:14:16.194 "data_size": 65536 00:14:16.194 } 00:14:16.194 ] 00:14:16.194 }' 00:14:16.194 23:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.194 23:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:16.194 23:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.194 23:54:10 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:16.194 23:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:16.194 23:54:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.194 23:54:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.194 [2024-11-02 23:54:10.083309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:16.194 [2024-11-02 23:54:10.087441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027e70 00:14:16.194 23:54:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.194 23:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:16.194 [2024-11-02 23:54:10.089574] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:17.175 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.175 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.176 23:54:11 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.176 "name": "raid_bdev1", 00:14:17.176 "uuid": "87890999-9c20-4859-8803-0e53ebf3fc27", 00:14:17.176 "strip_size_kb": 64, 00:14:17.176 "state": "online", 00:14:17.176 "raid_level": "raid5f", 00:14:17.176 "superblock": false, 00:14:17.176 "num_base_bdevs": 4, 00:14:17.176 "num_base_bdevs_discovered": 4, 00:14:17.176 "num_base_bdevs_operational": 4, 00:14:17.176 "process": { 00:14:17.176 "type": "rebuild", 00:14:17.176 "target": "spare", 00:14:17.176 "progress": { 00:14:17.176 "blocks": 19200, 00:14:17.176 "percent": 9 00:14:17.176 } 00:14:17.176 }, 00:14:17.176 "base_bdevs_list": [ 00:14:17.176 { 00:14:17.176 "name": "spare", 00:14:17.176 "uuid": "3f2e2f93-f9f1-5222-9cc8-81f27cf06f7d", 00:14:17.176 "is_configured": true, 00:14:17.176 "data_offset": 0, 00:14:17.176 "data_size": 65536 00:14:17.176 }, 00:14:17.176 { 00:14:17.176 "name": "BaseBdev2", 00:14:17.176 "uuid": "88f991f0-da98-5647-9995-bd9297bf5683", 00:14:17.176 "is_configured": true, 00:14:17.176 "data_offset": 0, 00:14:17.176 "data_size": 65536 00:14:17.176 }, 00:14:17.176 { 00:14:17.176 "name": "BaseBdev3", 00:14:17.176 "uuid": "dbf1cac9-10f4-5cd7-8f2c-c7d952bb8ea2", 00:14:17.176 "is_configured": true, 00:14:17.176 "data_offset": 0, 00:14:17.176 "data_size": 65536 00:14:17.176 }, 00:14:17.176 { 00:14:17.176 "name": "BaseBdev4", 00:14:17.176 "uuid": "5b0a342d-6e14-5b01-b985-42803fdfad00", 00:14:17.176 "is_configured": true, 00:14:17.176 "data_offset": 0, 00:14:17.176 "data_size": 65536 00:14:17.176 } 00:14:17.176 ] 00:14:17.176 }' 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=510 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.176 23:54:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.177 23:54:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.438 23:54:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.438 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.438 "name": "raid_bdev1", 00:14:17.438 "uuid": "87890999-9c20-4859-8803-0e53ebf3fc27", 
00:14:17.438 "strip_size_kb": 64, 00:14:17.438 "state": "online", 00:14:17.438 "raid_level": "raid5f", 00:14:17.438 "superblock": false, 00:14:17.438 "num_base_bdevs": 4, 00:14:17.438 "num_base_bdevs_discovered": 4, 00:14:17.438 "num_base_bdevs_operational": 4, 00:14:17.438 "process": { 00:14:17.438 "type": "rebuild", 00:14:17.438 "target": "spare", 00:14:17.438 "progress": { 00:14:17.438 "blocks": 21120, 00:14:17.438 "percent": 10 00:14:17.438 } 00:14:17.438 }, 00:14:17.438 "base_bdevs_list": [ 00:14:17.438 { 00:14:17.438 "name": "spare", 00:14:17.438 "uuid": "3f2e2f93-f9f1-5222-9cc8-81f27cf06f7d", 00:14:17.438 "is_configured": true, 00:14:17.438 "data_offset": 0, 00:14:17.438 "data_size": 65536 00:14:17.438 }, 00:14:17.438 { 00:14:17.438 "name": "BaseBdev2", 00:14:17.438 "uuid": "88f991f0-da98-5647-9995-bd9297bf5683", 00:14:17.438 "is_configured": true, 00:14:17.438 "data_offset": 0, 00:14:17.438 "data_size": 65536 00:14:17.438 }, 00:14:17.438 { 00:14:17.438 "name": "BaseBdev3", 00:14:17.438 "uuid": "dbf1cac9-10f4-5cd7-8f2c-c7d952bb8ea2", 00:14:17.438 "is_configured": true, 00:14:17.438 "data_offset": 0, 00:14:17.438 "data_size": 65536 00:14:17.438 }, 00:14:17.438 { 00:14:17.438 "name": "BaseBdev4", 00:14:17.438 "uuid": "5b0a342d-6e14-5b01-b985-42803fdfad00", 00:14:17.438 "is_configured": true, 00:14:17.438 "data_offset": 0, 00:14:17.438 "data_size": 65536 00:14:17.438 } 00:14:17.438 ] 00:14:17.438 }' 00:14:17.438 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.438 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.438 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.438 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.438 23:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:18.378 23:54:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.378 23:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.378 23:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.378 23:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.378 23:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.378 23:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.378 23:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.378 23:54:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.378 23:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.378 23:54:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.378 23:54:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.378 23:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.378 "name": "raid_bdev1", 00:14:18.379 "uuid": "87890999-9c20-4859-8803-0e53ebf3fc27", 00:14:18.379 "strip_size_kb": 64, 00:14:18.379 "state": "online", 00:14:18.379 "raid_level": "raid5f", 00:14:18.379 "superblock": false, 00:14:18.379 "num_base_bdevs": 4, 00:14:18.379 "num_base_bdevs_discovered": 4, 00:14:18.379 "num_base_bdevs_operational": 4, 00:14:18.379 "process": { 00:14:18.379 "type": "rebuild", 00:14:18.379 "target": "spare", 00:14:18.379 "progress": { 00:14:18.379 "blocks": 44160, 00:14:18.379 "percent": 22 00:14:18.379 } 00:14:18.379 }, 00:14:18.379 "base_bdevs_list": [ 00:14:18.379 { 00:14:18.379 "name": "spare", 00:14:18.379 "uuid": "3f2e2f93-f9f1-5222-9cc8-81f27cf06f7d", 
00:14:18.379 "is_configured": true, 00:14:18.379 "data_offset": 0, 00:14:18.379 "data_size": 65536 00:14:18.379 }, 00:14:18.379 { 00:14:18.379 "name": "BaseBdev2", 00:14:18.379 "uuid": "88f991f0-da98-5647-9995-bd9297bf5683", 00:14:18.379 "is_configured": true, 00:14:18.379 "data_offset": 0, 00:14:18.379 "data_size": 65536 00:14:18.379 }, 00:14:18.379 { 00:14:18.379 "name": "BaseBdev3", 00:14:18.379 "uuid": "dbf1cac9-10f4-5cd7-8f2c-c7d952bb8ea2", 00:14:18.379 "is_configured": true, 00:14:18.379 "data_offset": 0, 00:14:18.379 "data_size": 65536 00:14:18.379 }, 00:14:18.379 { 00:14:18.379 "name": "BaseBdev4", 00:14:18.379 "uuid": "5b0a342d-6e14-5b01-b985-42803fdfad00", 00:14:18.379 "is_configured": true, 00:14:18.379 "data_offset": 0, 00:14:18.379 "data_size": 65536 00:14:18.379 } 00:14:18.379 ] 00:14:18.379 }' 00:14:18.379 23:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.639 23:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.639 23:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.639 23:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.639 23:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:19.576 23:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.576 23:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.576 23:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.576 23:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.576 23:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.576 23:54:13 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.577 23:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.577 23:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.577 23:54:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.577 23:54:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.577 23:54:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.577 23:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.577 "name": "raid_bdev1", 00:14:19.577 "uuid": "87890999-9c20-4859-8803-0e53ebf3fc27", 00:14:19.577 "strip_size_kb": 64, 00:14:19.577 "state": "online", 00:14:19.577 "raid_level": "raid5f", 00:14:19.577 "superblock": false, 00:14:19.577 "num_base_bdevs": 4, 00:14:19.577 "num_base_bdevs_discovered": 4, 00:14:19.577 "num_base_bdevs_operational": 4, 00:14:19.577 "process": { 00:14:19.577 "type": "rebuild", 00:14:19.577 "target": "spare", 00:14:19.577 "progress": { 00:14:19.577 "blocks": 65280, 00:14:19.577 "percent": 33 00:14:19.577 } 00:14:19.577 }, 00:14:19.577 "base_bdevs_list": [ 00:14:19.577 { 00:14:19.577 "name": "spare", 00:14:19.577 "uuid": "3f2e2f93-f9f1-5222-9cc8-81f27cf06f7d", 00:14:19.577 "is_configured": true, 00:14:19.577 "data_offset": 0, 00:14:19.577 "data_size": 65536 00:14:19.577 }, 00:14:19.577 { 00:14:19.577 "name": "BaseBdev2", 00:14:19.577 "uuid": "88f991f0-da98-5647-9995-bd9297bf5683", 00:14:19.577 "is_configured": true, 00:14:19.577 "data_offset": 0, 00:14:19.577 "data_size": 65536 00:14:19.577 }, 00:14:19.577 { 00:14:19.577 "name": "BaseBdev3", 00:14:19.577 "uuid": "dbf1cac9-10f4-5cd7-8f2c-c7d952bb8ea2", 00:14:19.577 "is_configured": true, 00:14:19.577 "data_offset": 0, 00:14:19.577 "data_size": 65536 00:14:19.577 }, 00:14:19.577 { 00:14:19.577 "name": 
"BaseBdev4", 00:14:19.577 "uuid": "5b0a342d-6e14-5b01-b985-42803fdfad00", 00:14:19.577 "is_configured": true, 00:14:19.577 "data_offset": 0, 00:14:19.577 "data_size": 65536 00:14:19.577 } 00:14:19.577 ] 00:14:19.577 }' 00:14:19.577 23:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.577 23:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.577 23:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.835 23:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.835 23:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:20.782 23:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.782 23:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.782 23:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.782 23:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.783 23:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.783 23:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.783 23:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.783 23:54:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.783 23:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.783 23:54:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.783 23:54:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.783 23:54:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.783 "name": "raid_bdev1", 00:14:20.783 "uuid": "87890999-9c20-4859-8803-0e53ebf3fc27", 00:14:20.783 "strip_size_kb": 64, 00:14:20.783 "state": "online", 00:14:20.783 "raid_level": "raid5f", 00:14:20.783 "superblock": false, 00:14:20.783 "num_base_bdevs": 4, 00:14:20.783 "num_base_bdevs_discovered": 4, 00:14:20.783 "num_base_bdevs_operational": 4, 00:14:20.783 "process": { 00:14:20.783 "type": "rebuild", 00:14:20.783 "target": "spare", 00:14:20.783 "progress": { 00:14:20.783 "blocks": 88320, 00:14:20.783 "percent": 44 00:14:20.783 } 00:14:20.783 }, 00:14:20.783 "base_bdevs_list": [ 00:14:20.783 { 00:14:20.783 "name": "spare", 00:14:20.783 "uuid": "3f2e2f93-f9f1-5222-9cc8-81f27cf06f7d", 00:14:20.783 "is_configured": true, 00:14:20.783 "data_offset": 0, 00:14:20.783 "data_size": 65536 00:14:20.783 }, 00:14:20.783 { 00:14:20.783 "name": "BaseBdev2", 00:14:20.783 "uuid": "88f991f0-da98-5647-9995-bd9297bf5683", 00:14:20.783 "is_configured": true, 00:14:20.783 "data_offset": 0, 00:14:20.783 "data_size": 65536 00:14:20.783 }, 00:14:20.783 { 00:14:20.783 "name": "BaseBdev3", 00:14:20.783 "uuid": "dbf1cac9-10f4-5cd7-8f2c-c7d952bb8ea2", 00:14:20.783 "is_configured": true, 00:14:20.783 "data_offset": 0, 00:14:20.783 "data_size": 65536 00:14:20.783 }, 00:14:20.783 { 00:14:20.783 "name": "BaseBdev4", 00:14:20.783 "uuid": "5b0a342d-6e14-5b01-b985-42803fdfad00", 00:14:20.783 "is_configured": true, 00:14:20.783 "data_offset": 0, 00:14:20.783 "data_size": 65536 00:14:20.783 } 00:14:20.783 ] 00:14:20.783 }' 00:14:20.783 23:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.783 23:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.783 23:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.783 23:54:14 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.783 23:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.163 23:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.163 23:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.163 23:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.163 23:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.163 23:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.163 23:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.163 23:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.163 23:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.163 23:54:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.163 23:54:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.163 23:54:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.163 23:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.163 "name": "raid_bdev1", 00:14:22.163 "uuid": "87890999-9c20-4859-8803-0e53ebf3fc27", 00:14:22.163 "strip_size_kb": 64, 00:14:22.163 "state": "online", 00:14:22.163 "raid_level": "raid5f", 00:14:22.163 "superblock": false, 00:14:22.163 "num_base_bdevs": 4, 00:14:22.163 "num_base_bdevs_discovered": 4, 00:14:22.163 "num_base_bdevs_operational": 4, 00:14:22.163 "process": { 00:14:22.163 "type": "rebuild", 00:14:22.163 "target": "spare", 00:14:22.163 "progress": { 00:14:22.163 "blocks": 109440, 00:14:22.163 "percent": 55 00:14:22.163 } 
00:14:22.163 }, 00:14:22.163 "base_bdevs_list": [ 00:14:22.163 { 00:14:22.163 "name": "spare", 00:14:22.163 "uuid": "3f2e2f93-f9f1-5222-9cc8-81f27cf06f7d", 00:14:22.163 "is_configured": true, 00:14:22.163 "data_offset": 0, 00:14:22.163 "data_size": 65536 00:14:22.163 }, 00:14:22.163 { 00:14:22.163 "name": "BaseBdev2", 00:14:22.163 "uuid": "88f991f0-da98-5647-9995-bd9297bf5683", 00:14:22.163 "is_configured": true, 00:14:22.163 "data_offset": 0, 00:14:22.163 "data_size": 65536 00:14:22.163 }, 00:14:22.163 { 00:14:22.163 "name": "BaseBdev3", 00:14:22.163 "uuid": "dbf1cac9-10f4-5cd7-8f2c-c7d952bb8ea2", 00:14:22.163 "is_configured": true, 00:14:22.163 "data_offset": 0, 00:14:22.163 "data_size": 65536 00:14:22.163 }, 00:14:22.163 { 00:14:22.163 "name": "BaseBdev4", 00:14:22.163 "uuid": "5b0a342d-6e14-5b01-b985-42803fdfad00", 00:14:22.163 "is_configured": true, 00:14:22.163 "data_offset": 0, 00:14:22.163 "data_size": 65536 00:14:22.163 } 00:14:22.163 ] 00:14:22.163 }' 00:14:22.163 23:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.163 23:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.163 23:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.163 23:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.163 23:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:23.104 23:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.104 23:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.104 23:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.104 23:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.104 
23:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.104 23:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.104 23:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.105 23:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.105 23:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.105 23:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.105 23:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.105 23:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.105 "name": "raid_bdev1", 00:14:23.105 "uuid": "87890999-9c20-4859-8803-0e53ebf3fc27", 00:14:23.105 "strip_size_kb": 64, 00:14:23.105 "state": "online", 00:14:23.105 "raid_level": "raid5f", 00:14:23.105 "superblock": false, 00:14:23.105 "num_base_bdevs": 4, 00:14:23.105 "num_base_bdevs_discovered": 4, 00:14:23.105 "num_base_bdevs_operational": 4, 00:14:23.105 "process": { 00:14:23.105 "type": "rebuild", 00:14:23.105 "target": "spare", 00:14:23.105 "progress": { 00:14:23.105 "blocks": 132480, 00:14:23.105 "percent": 67 00:14:23.105 } 00:14:23.105 }, 00:14:23.105 "base_bdevs_list": [ 00:14:23.105 { 00:14:23.105 "name": "spare", 00:14:23.105 "uuid": "3f2e2f93-f9f1-5222-9cc8-81f27cf06f7d", 00:14:23.105 "is_configured": true, 00:14:23.105 "data_offset": 0, 00:14:23.105 "data_size": 65536 00:14:23.105 }, 00:14:23.105 { 00:14:23.105 "name": "BaseBdev2", 00:14:23.105 "uuid": "88f991f0-da98-5647-9995-bd9297bf5683", 00:14:23.105 "is_configured": true, 00:14:23.105 "data_offset": 0, 00:14:23.105 "data_size": 65536 00:14:23.105 }, 00:14:23.105 { 00:14:23.105 "name": "BaseBdev3", 00:14:23.105 "uuid": "dbf1cac9-10f4-5cd7-8f2c-c7d952bb8ea2", 
00:14:23.105 "is_configured": true, 00:14:23.105 "data_offset": 0, 00:14:23.105 "data_size": 65536 00:14:23.105 }, 00:14:23.105 { 00:14:23.105 "name": "BaseBdev4", 00:14:23.105 "uuid": "5b0a342d-6e14-5b01-b985-42803fdfad00", 00:14:23.105 "is_configured": true, 00:14:23.105 "data_offset": 0, 00:14:23.105 "data_size": 65536 00:14:23.105 } 00:14:23.105 ] 00:14:23.105 }' 00:14:23.105 23:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.105 23:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.105 23:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.105 23:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.105 23:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:24.082 23:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.082 23:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.082 23:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.082 23:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.082 23:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.082 23:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.082 23:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.082 23:54:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.083 23:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.083 23:54:18 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:24.343 23:54:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.343 23:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.343 "name": "raid_bdev1", 00:14:24.343 "uuid": "87890999-9c20-4859-8803-0e53ebf3fc27", 00:14:24.343 "strip_size_kb": 64, 00:14:24.343 "state": "online", 00:14:24.343 "raid_level": "raid5f", 00:14:24.343 "superblock": false, 00:14:24.343 "num_base_bdevs": 4, 00:14:24.343 "num_base_bdevs_discovered": 4, 00:14:24.343 "num_base_bdevs_operational": 4, 00:14:24.343 "process": { 00:14:24.343 "type": "rebuild", 00:14:24.343 "target": "spare", 00:14:24.343 "progress": { 00:14:24.343 "blocks": 153600, 00:14:24.343 "percent": 78 00:14:24.343 } 00:14:24.343 }, 00:14:24.343 "base_bdevs_list": [ 00:14:24.343 { 00:14:24.343 "name": "spare", 00:14:24.343 "uuid": "3f2e2f93-f9f1-5222-9cc8-81f27cf06f7d", 00:14:24.343 "is_configured": true, 00:14:24.343 "data_offset": 0, 00:14:24.343 "data_size": 65536 00:14:24.343 }, 00:14:24.343 { 00:14:24.343 "name": "BaseBdev2", 00:14:24.343 "uuid": "88f991f0-da98-5647-9995-bd9297bf5683", 00:14:24.343 "is_configured": true, 00:14:24.343 "data_offset": 0, 00:14:24.343 "data_size": 65536 00:14:24.343 }, 00:14:24.343 { 00:14:24.343 "name": "BaseBdev3", 00:14:24.343 "uuid": "dbf1cac9-10f4-5cd7-8f2c-c7d952bb8ea2", 00:14:24.343 "is_configured": true, 00:14:24.343 "data_offset": 0, 00:14:24.343 "data_size": 65536 00:14:24.343 }, 00:14:24.343 { 00:14:24.343 "name": "BaseBdev4", 00:14:24.343 "uuid": "5b0a342d-6e14-5b01-b985-42803fdfad00", 00:14:24.343 "is_configured": true, 00:14:24.343 "data_offset": 0, 00:14:24.343 "data_size": 65536 00:14:24.343 } 00:14:24.343 ] 00:14:24.343 }' 00:14:24.343 23:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.343 23:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:14:24.343 23:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.343 23:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.343 23:54:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:25.283 23:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:25.283 23:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.283 23:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.283 23:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.283 23:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.283 23:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.283 23:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.283 23:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.283 23:54:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.283 23:54:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.283 23:54:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.283 23:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.283 "name": "raid_bdev1", 00:14:25.283 "uuid": "87890999-9c20-4859-8803-0e53ebf3fc27", 00:14:25.283 "strip_size_kb": 64, 00:14:25.283 "state": "online", 00:14:25.283 "raid_level": "raid5f", 00:14:25.283 "superblock": false, 00:14:25.283 "num_base_bdevs": 4, 00:14:25.283 "num_base_bdevs_discovered": 4, 00:14:25.283 "num_base_bdevs_operational": 4, 00:14:25.283 
"process": { 00:14:25.283 "type": "rebuild", 00:14:25.283 "target": "spare", 00:14:25.283 "progress": { 00:14:25.283 "blocks": 174720, 00:14:25.283 "percent": 88 00:14:25.283 } 00:14:25.283 }, 00:14:25.283 "base_bdevs_list": [ 00:14:25.283 { 00:14:25.283 "name": "spare", 00:14:25.283 "uuid": "3f2e2f93-f9f1-5222-9cc8-81f27cf06f7d", 00:14:25.283 "is_configured": true, 00:14:25.283 "data_offset": 0, 00:14:25.283 "data_size": 65536 00:14:25.283 }, 00:14:25.283 { 00:14:25.283 "name": "BaseBdev2", 00:14:25.283 "uuid": "88f991f0-da98-5647-9995-bd9297bf5683", 00:14:25.283 "is_configured": true, 00:14:25.283 "data_offset": 0, 00:14:25.283 "data_size": 65536 00:14:25.283 }, 00:14:25.283 { 00:14:25.283 "name": "BaseBdev3", 00:14:25.283 "uuid": "dbf1cac9-10f4-5cd7-8f2c-c7d952bb8ea2", 00:14:25.283 "is_configured": true, 00:14:25.283 "data_offset": 0, 00:14:25.283 "data_size": 65536 00:14:25.283 }, 00:14:25.283 { 00:14:25.283 "name": "BaseBdev4", 00:14:25.283 "uuid": "5b0a342d-6e14-5b01-b985-42803fdfad00", 00:14:25.283 "is_configured": true, 00:14:25.283 "data_offset": 0, 00:14:25.283 "data_size": 65536 00:14:25.283 } 00:14:25.283 ] 00:14:25.283 }' 00:14:25.283 23:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.554 23:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.554 23:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.554 23:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.554 23:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:26.494 [2024-11-02 23:54:20.428278] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:26.495 [2024-11-02 23:54:20.428345] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:26.495 [2024-11-02 
23:54:20.428380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.495 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:26.495 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.495 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.495 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.495 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.495 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.495 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.495 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.495 23:54:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.495 23:54:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.495 23:54:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.495 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.495 "name": "raid_bdev1", 00:14:26.495 "uuid": "87890999-9c20-4859-8803-0e53ebf3fc27", 00:14:26.495 "strip_size_kb": 64, 00:14:26.495 "state": "online", 00:14:26.495 "raid_level": "raid5f", 00:14:26.495 "superblock": false, 00:14:26.495 "num_base_bdevs": 4, 00:14:26.495 "num_base_bdevs_discovered": 4, 00:14:26.495 "num_base_bdevs_operational": 4, 00:14:26.495 "base_bdevs_list": [ 00:14:26.495 { 00:14:26.495 "name": "spare", 00:14:26.495 "uuid": "3f2e2f93-f9f1-5222-9cc8-81f27cf06f7d", 00:14:26.495 "is_configured": true, 00:14:26.495 "data_offset": 0, 00:14:26.495 "data_size": 65536 
00:14:26.495 }, 00:14:26.495 { 00:14:26.495 "name": "BaseBdev2", 00:14:26.495 "uuid": "88f991f0-da98-5647-9995-bd9297bf5683", 00:14:26.495 "is_configured": true, 00:14:26.495 "data_offset": 0, 00:14:26.495 "data_size": 65536 00:14:26.495 }, 00:14:26.495 { 00:14:26.495 "name": "BaseBdev3", 00:14:26.495 "uuid": "dbf1cac9-10f4-5cd7-8f2c-c7d952bb8ea2", 00:14:26.495 "is_configured": true, 00:14:26.495 "data_offset": 0, 00:14:26.495 "data_size": 65536 00:14:26.495 }, 00:14:26.495 { 00:14:26.495 "name": "BaseBdev4", 00:14:26.495 "uuid": "5b0a342d-6e14-5b01-b985-42803fdfad00", 00:14:26.495 "is_configured": true, 00:14:26.495 "data_offset": 0, 00:14:26.495 "data_size": 65536 00:14:26.495 } 00:14:26.495 ] 00:14:26.495 }' 00:14:26.495 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.495 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:26.495 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.755 "name": "raid_bdev1", 00:14:26.755 "uuid": "87890999-9c20-4859-8803-0e53ebf3fc27", 00:14:26.755 "strip_size_kb": 64, 00:14:26.755 "state": "online", 00:14:26.755 "raid_level": "raid5f", 00:14:26.755 "superblock": false, 00:14:26.755 "num_base_bdevs": 4, 00:14:26.755 "num_base_bdevs_discovered": 4, 00:14:26.755 "num_base_bdevs_operational": 4, 00:14:26.755 "base_bdevs_list": [ 00:14:26.755 { 00:14:26.755 "name": "spare", 00:14:26.755 "uuid": "3f2e2f93-f9f1-5222-9cc8-81f27cf06f7d", 00:14:26.755 "is_configured": true, 00:14:26.755 "data_offset": 0, 00:14:26.755 "data_size": 65536 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "name": "BaseBdev2", 00:14:26.755 "uuid": "88f991f0-da98-5647-9995-bd9297bf5683", 00:14:26.755 "is_configured": true, 00:14:26.755 "data_offset": 0, 00:14:26.755 "data_size": 65536 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "name": "BaseBdev3", 00:14:26.755 "uuid": "dbf1cac9-10f4-5cd7-8f2c-c7d952bb8ea2", 00:14:26.755 "is_configured": true, 00:14:26.755 "data_offset": 0, 00:14:26.755 "data_size": 65536 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "name": "BaseBdev4", 00:14:26.755 "uuid": "5b0a342d-6e14-5b01-b985-42803fdfad00", 00:14:26.755 "is_configured": true, 00:14:26.755 "data_offset": 0, 00:14:26.755 "data_size": 65536 00:14:26.755 } 00:14:26.755 ] 00:14:26.755 }' 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.755 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.755 "name": "raid_bdev1", 
00:14:26.755 "uuid": "87890999-9c20-4859-8803-0e53ebf3fc27", 00:14:26.755 "strip_size_kb": 64, 00:14:26.755 "state": "online", 00:14:26.755 "raid_level": "raid5f", 00:14:26.755 "superblock": false, 00:14:26.755 "num_base_bdevs": 4, 00:14:26.755 "num_base_bdevs_discovered": 4, 00:14:26.755 "num_base_bdevs_operational": 4, 00:14:26.755 "base_bdevs_list": [ 00:14:26.755 { 00:14:26.755 "name": "spare", 00:14:26.755 "uuid": "3f2e2f93-f9f1-5222-9cc8-81f27cf06f7d", 00:14:26.755 "is_configured": true, 00:14:26.755 "data_offset": 0, 00:14:26.755 "data_size": 65536 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "name": "BaseBdev2", 00:14:26.755 "uuid": "88f991f0-da98-5647-9995-bd9297bf5683", 00:14:26.755 "is_configured": true, 00:14:26.755 "data_offset": 0, 00:14:26.755 "data_size": 65536 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "name": "BaseBdev3", 00:14:26.755 "uuid": "dbf1cac9-10f4-5cd7-8f2c-c7d952bb8ea2", 00:14:26.755 "is_configured": true, 00:14:26.755 "data_offset": 0, 00:14:26.755 "data_size": 65536 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "name": "BaseBdev4", 00:14:26.755 "uuid": "5b0a342d-6e14-5b01-b985-42803fdfad00", 00:14:26.756 "is_configured": true, 00:14:26.756 "data_offset": 0, 00:14:26.756 "data_size": 65536 00:14:26.756 } 00:14:26.756 ] 00:14:26.756 }' 00:14:26.756 23:54:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.756 23:54:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.338 [2024-11-02 23:54:21.216493] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.338 [2024-11-02 23:54:21.216529] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:14:27.338 [2024-11-02 23:54:21.216607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.338 [2024-11-02 23:54:21.216705] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.338 [2024-11-02 23:54:21.216732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:27.338 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:27.598 /dev/nbd0 00:14:27.598 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:27.598 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:27.598 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:27.598 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:27.598 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:27.598 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:27.598 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:27.598 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:27.598 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:27.598 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:27.598 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:27.598 1+0 records in 00:14:27.598 1+0 records out 00:14:27.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384573 s, 10.7 MB/s 00:14:27.598 23:54:21 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.598 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:27.598 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.598 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:27.598 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:27.598 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:27.598 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:27.598 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:27.859 /dev/nbd1 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:27.859 23:54:21 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:27.859 1+0 records in 00:14:27.859 1+0 records out 00:14:27.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463408 s, 8.8 MB/s 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:27.859 23:54:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:28.120 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:28.120 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:28.120 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:28.120 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:28.120 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:28.120 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:28.120 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:28.120 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:28.120 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:28.120 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = 
true ']' 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 94806 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 94806 ']' 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 94806 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94806 00:14:28.380 killing process with pid 94806 00:14:28.380 Received shutdown signal, test time was about 60.000000 seconds 00:14:28.380 00:14:28.380 Latency(us) 00:14:28.380 [2024-11-02T23:54:22.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.380 [2024-11-02T23:54:22.475Z] =================================================================================================================== 00:14:28.380 [2024-11-02T23:54:22.475Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94806' 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 94806 00:14:28.380 [2024-11-02 23:54:22.342871] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:28.380 23:54:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 94806 00:14:28.381 [2024-11-02 23:54:22.393412] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@786 -- # return 0 00:14:28.641 00:14:28.641 real 0m17.394s 00:14:28.641 user 0m21.176s 00:14:28.641 sys 0m2.415s 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.641 ************************************ 00:14:28.641 END TEST raid5f_rebuild_test 00:14:28.641 ************************************ 00:14:28.641 23:54:22 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:14:28.641 23:54:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:28.641 23:54:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:28.641 23:54:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:28.641 ************************************ 00:14:28.641 START TEST raid5f_rebuild_test_sb 00:14:28.641 ************************************ 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:28.641 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f 
'!=' raid1 ']' 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95295 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95295 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 95295 ']' 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:28.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:28.642 23:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.902 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:28.902 Zero copy mechanism will not be used. 
00:14:28.902 [2024-11-02 23:54:22.767207] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:14:28.902 [2024-11-02 23:54:22.767332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95295 ] 00:14:28.902 [2024-11-02 23:54:22.923735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.902 [2024-11-02 23:54:22.949479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.902 [2024-11-02 23:54:22.991892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.902 [2024-11-02 23:54:22.991934] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.842 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.843 BaseBdev1_malloc 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:29.843 [2024-11-02 23:54:23.601767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:29.843 [2024-11-02 23:54:23.601836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.843 [2024-11-02 23:54:23.601876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:29.843 [2024-11-02 23:54:23.601889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.843 [2024-11-02 23:54:23.603900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.843 [2024-11-02 23:54:23.603934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:29.843 BaseBdev1 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.843 BaseBdev2_malloc 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.843 [2024-11-02 23:54:23.630135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:29.843 
[2024-11-02 23:54:23.630182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.843 [2024-11-02 23:54:23.630216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:29.843 [2024-11-02 23:54:23.630225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.843 [2024-11-02 23:54:23.632202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.843 [2024-11-02 23:54:23.632250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:29.843 BaseBdev2 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.843 BaseBdev3_malloc 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.843 [2024-11-02 23:54:23.658569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:29.843 [2024-11-02 23:54:23.658616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.843 [2024-11-02 23:54:23.658654] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:29.843 [2024-11-02 23:54:23.658663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.843 [2024-11-02 23:54:23.660615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.843 [2024-11-02 23:54:23.660647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:29.843 BaseBdev3 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.843 BaseBdev4_malloc 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.843 [2024-11-02 23:54:23.702520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:29.843 [2024-11-02 23:54:23.702603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.843 [2024-11-02 23:54:23.702642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:29.843 [2024-11-02 23:54:23.702660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:14:29.843 [2024-11-02 23:54:23.706546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.843 [2024-11-02 23:54:23.706603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:29.843 BaseBdev4 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.843 spare_malloc 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.843 spare_delay 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.843 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.843 [2024-11-02 23:54:23.743829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:29.843 [2024-11-02 23:54:23.743872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.843 [2024-11-02 23:54:23.743915] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:29.843 [2024-11-02 23:54:23.743923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.843 [2024-11-02 23:54:23.745912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.843 [2024-11-02 23:54:23.745943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:29.843 spare 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.844 [2024-11-02 23:54:23.755894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:29.844 [2024-11-02 23:54:23.757646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:29.844 [2024-11-02 23:54:23.757707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:29.844 [2024-11-02 23:54:23.757788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:29.844 [2024-11-02 23:54:23.757951] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:29.844 [2024-11-02 23:54:23.757980] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:29.844 [2024-11-02 23:54:23.758213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:29.844 [2024-11-02 23:54:23.758647] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:29.844 
[2024-11-02 23:54:23.758669] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:29.844 [2024-11-02 23:54:23.758797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.844 "name": "raid_bdev1", 00:14:29.844 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:29.844 "strip_size_kb": 64, 00:14:29.844 "state": "online", 00:14:29.844 "raid_level": "raid5f", 00:14:29.844 "superblock": true, 00:14:29.844 "num_base_bdevs": 4, 00:14:29.844 "num_base_bdevs_discovered": 4, 00:14:29.844 "num_base_bdevs_operational": 4, 00:14:29.844 "base_bdevs_list": [ 00:14:29.844 { 00:14:29.844 "name": "BaseBdev1", 00:14:29.844 "uuid": "962c02c0-1ede-50da-842b-0da9d0918408", 00:14:29.844 "is_configured": true, 00:14:29.844 "data_offset": 2048, 00:14:29.844 "data_size": 63488 00:14:29.844 }, 00:14:29.844 { 00:14:29.844 "name": "BaseBdev2", 00:14:29.844 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:29.844 "is_configured": true, 00:14:29.844 "data_offset": 2048, 00:14:29.844 "data_size": 63488 00:14:29.844 }, 00:14:29.844 { 00:14:29.844 "name": "BaseBdev3", 00:14:29.844 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:29.844 "is_configured": true, 00:14:29.844 "data_offset": 2048, 00:14:29.844 "data_size": 63488 00:14:29.844 }, 00:14:29.844 { 00:14:29.844 "name": "BaseBdev4", 00:14:29.844 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:29.844 "is_configured": true, 00:14:29.844 "data_offset": 2048, 00:14:29.844 "data_size": 63488 00:14:29.844 } 00:14:29.844 ] 00:14:29.844 }' 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.844 23:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.103 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:30.103 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:30.103 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.103 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.103 [2024-11-02 23:54:24.183956] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:30.364 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:30.364 [2024-11-02 23:54:24.431371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:14:30.364 /dev/nbd0 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:14:30.624 1+0 records in 00:14:30.624 1+0 records out 00:14:30.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038911 s, 10.5 MB/s 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:30.624 23:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:14:31.195 496+0 records in 00:14:31.195 496+0 records out 00:14:31.195 97517568 bytes (98 MB, 93 MiB) copied, 0.583692 s, 167 MB/s 00:14:31.195 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:31.195 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.195 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:31.195 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- 
# local nbd_list 00:14:31.195 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:31.195 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.195 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:31.455 [2024-11-02 23:54:25.306826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.455 [2024-11-02 23:54:25.320429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.455 "name": "raid_bdev1", 00:14:31.455 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:31.455 "strip_size_kb": 64, 00:14:31.455 "state": "online", 00:14:31.455 "raid_level": "raid5f", 00:14:31.455 "superblock": true, 00:14:31.455 "num_base_bdevs": 4, 00:14:31.455 "num_base_bdevs_discovered": 3, 00:14:31.455 
"num_base_bdevs_operational": 3, 00:14:31.455 "base_bdevs_list": [ 00:14:31.455 { 00:14:31.455 "name": null, 00:14:31.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.455 "is_configured": false, 00:14:31.455 "data_offset": 0, 00:14:31.455 "data_size": 63488 00:14:31.455 }, 00:14:31.455 { 00:14:31.455 "name": "BaseBdev2", 00:14:31.455 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:31.455 "is_configured": true, 00:14:31.455 "data_offset": 2048, 00:14:31.455 "data_size": 63488 00:14:31.455 }, 00:14:31.455 { 00:14:31.455 "name": "BaseBdev3", 00:14:31.455 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:31.455 "is_configured": true, 00:14:31.455 "data_offset": 2048, 00:14:31.455 "data_size": 63488 00:14:31.455 }, 00:14:31.455 { 00:14:31.455 "name": "BaseBdev4", 00:14:31.455 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:31.455 "is_configured": true, 00:14:31.455 "data_offset": 2048, 00:14:31.455 "data_size": 63488 00:14:31.455 } 00:14:31.455 ] 00:14:31.455 }' 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.455 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.716 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:31.716 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.716 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.716 [2024-11-02 23:54:25.775697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:31.716 [2024-11-02 23:54:25.779917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0 00:14:31.716 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.716 23:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:31.716 
[2024-11-02 23:54:25.782122] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:33.097 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.097 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.097 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.097 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.097 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.097 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.097 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.097 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.097 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.097 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.097 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.097 "name": "raid_bdev1", 00:14:33.097 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:33.097 "strip_size_kb": 64, 00:14:33.097 "state": "online", 00:14:33.097 "raid_level": "raid5f", 00:14:33.097 "superblock": true, 00:14:33.097 "num_base_bdevs": 4, 00:14:33.097 "num_base_bdevs_discovered": 4, 00:14:33.097 "num_base_bdevs_operational": 4, 00:14:33.097 "process": { 00:14:33.097 "type": "rebuild", 00:14:33.097 "target": "spare", 00:14:33.097 "progress": { 00:14:33.097 "blocks": 19200, 00:14:33.097 "percent": 10 00:14:33.097 } 00:14:33.097 }, 00:14:33.097 "base_bdevs_list": [ 00:14:33.097 { 00:14:33.097 "name": 
"spare", 00:14:33.097 "uuid": "8932201c-5ef4-55ef-9105-ad89aa1fbab4", 00:14:33.097 "is_configured": true, 00:14:33.097 "data_offset": 2048, 00:14:33.097 "data_size": 63488 00:14:33.097 }, 00:14:33.097 { 00:14:33.097 "name": "BaseBdev2", 00:14:33.097 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:33.097 "is_configured": true, 00:14:33.097 "data_offset": 2048, 00:14:33.097 "data_size": 63488 00:14:33.097 }, 00:14:33.097 { 00:14:33.097 "name": "BaseBdev3", 00:14:33.097 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:33.097 "is_configured": true, 00:14:33.097 "data_offset": 2048, 00:14:33.097 "data_size": 63488 00:14:33.097 }, 00:14:33.097 { 00:14:33.097 "name": "BaseBdev4", 00:14:33.097 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:33.097 "is_configured": true, 00:14:33.097 "data_offset": 2048, 00:14:33.097 "data_size": 63488 00:14:33.097 } 00:14:33.097 ] 00:14:33.097 }' 00:14:33.097 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.097 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.097 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.097 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.097 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:33.097 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.097 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.097 [2024-11-02 23:54:26.934892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.098 [2024-11-02 23:54:26.987483] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:33.098 [2024-11-02 
23:54:26.987541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.098 [2024-11-02 23:54:26.987560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.098 [2024-11-02 23:54:26.987570] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:33.098 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.098 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:33.098 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.098 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.098 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.098 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.098 23:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.098 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.098 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.098 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.098 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.098 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.098 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.098 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.098 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:33.098 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.098 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.098 "name": "raid_bdev1", 00:14:33.098 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:33.098 "strip_size_kb": 64, 00:14:33.098 "state": "online", 00:14:33.098 "raid_level": "raid5f", 00:14:33.098 "superblock": true, 00:14:33.098 "num_base_bdevs": 4, 00:14:33.098 "num_base_bdevs_discovered": 3, 00:14:33.098 "num_base_bdevs_operational": 3, 00:14:33.098 "base_bdevs_list": [ 00:14:33.098 { 00:14:33.098 "name": null, 00:14:33.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.098 "is_configured": false, 00:14:33.098 "data_offset": 0, 00:14:33.098 "data_size": 63488 00:14:33.098 }, 00:14:33.098 { 00:14:33.098 "name": "BaseBdev2", 00:14:33.098 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:33.098 "is_configured": true, 00:14:33.098 "data_offset": 2048, 00:14:33.098 "data_size": 63488 00:14:33.098 }, 00:14:33.098 { 00:14:33.098 "name": "BaseBdev3", 00:14:33.098 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:33.098 "is_configured": true, 00:14:33.098 "data_offset": 2048, 00:14:33.098 "data_size": 63488 00:14:33.098 }, 00:14:33.098 { 00:14:33.098 "name": "BaseBdev4", 00:14:33.098 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:33.098 "is_configured": true, 00:14:33.098 "data_offset": 2048, 00:14:33.098 "data_size": 63488 00:14:33.098 } 00:14:33.098 ] 00:14:33.098 }' 00:14:33.098 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.098 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.357 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.357 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:33.357 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.357 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.358 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.358 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.358 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.358 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.358 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.618 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.618 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.618 "name": "raid_bdev1", 00:14:33.618 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:33.618 "strip_size_kb": 64, 00:14:33.618 "state": "online", 00:14:33.618 "raid_level": "raid5f", 00:14:33.618 "superblock": true, 00:14:33.618 "num_base_bdevs": 4, 00:14:33.618 "num_base_bdevs_discovered": 3, 00:14:33.618 "num_base_bdevs_operational": 3, 00:14:33.618 "base_bdevs_list": [ 00:14:33.618 { 00:14:33.618 "name": null, 00:14:33.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.618 "is_configured": false, 00:14:33.618 "data_offset": 0, 00:14:33.618 "data_size": 63488 00:14:33.618 }, 00:14:33.618 { 00:14:33.618 "name": "BaseBdev2", 00:14:33.618 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:33.618 "is_configured": true, 00:14:33.618 "data_offset": 2048, 00:14:33.618 "data_size": 63488 00:14:33.618 }, 00:14:33.618 { 00:14:33.618 "name": "BaseBdev3", 00:14:33.618 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:33.618 "is_configured": true, 
00:14:33.618 "data_offset": 2048, 00:14:33.618 "data_size": 63488 00:14:33.618 }, 00:14:33.618 { 00:14:33.618 "name": "BaseBdev4", 00:14:33.618 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:33.618 "is_configured": true, 00:14:33.618 "data_offset": 2048, 00:14:33.618 "data_size": 63488 00:14:33.618 } 00:14:33.618 ] 00:14:33.618 }' 00:14:33.618 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.618 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:33.618 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.618 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:33.618 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:33.618 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.618 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.618 [2024-11-02 23:54:27.596542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:33.618 [2024-11-02 23:54:27.600617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027170 00:14:33.618 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.618 23:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:33.618 [2024-11-02 23:54:27.602729] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:34.558 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.558 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.558 23:54:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.558 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.558 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.558 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.558 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.558 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.558 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.558 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.819 "name": "raid_bdev1", 00:14:34.819 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:34.819 "strip_size_kb": 64, 00:14:34.819 "state": "online", 00:14:34.819 "raid_level": "raid5f", 00:14:34.819 "superblock": true, 00:14:34.819 "num_base_bdevs": 4, 00:14:34.819 "num_base_bdevs_discovered": 4, 00:14:34.819 "num_base_bdevs_operational": 4, 00:14:34.819 "process": { 00:14:34.819 "type": "rebuild", 00:14:34.819 "target": "spare", 00:14:34.819 "progress": { 00:14:34.819 "blocks": 19200, 00:14:34.819 "percent": 10 00:14:34.819 } 00:14:34.819 }, 00:14:34.819 "base_bdevs_list": [ 00:14:34.819 { 00:14:34.819 "name": "spare", 00:14:34.819 "uuid": "8932201c-5ef4-55ef-9105-ad89aa1fbab4", 00:14:34.819 "is_configured": true, 00:14:34.819 "data_offset": 2048, 00:14:34.819 "data_size": 63488 00:14:34.819 }, 00:14:34.819 { 00:14:34.819 "name": "BaseBdev2", 00:14:34.819 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:34.819 "is_configured": true, 00:14:34.819 "data_offset": 2048, 00:14:34.819 "data_size": 63488 
00:14:34.819 }, 00:14:34.819 { 00:14:34.819 "name": "BaseBdev3", 00:14:34.819 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:34.819 "is_configured": true, 00:14:34.819 "data_offset": 2048, 00:14:34.819 "data_size": 63488 00:14:34.819 }, 00:14:34.819 { 00:14:34.819 "name": "BaseBdev4", 00:14:34.819 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:34.819 "is_configured": true, 00:14:34.819 "data_offset": 2048, 00:14:34.819 "data_size": 63488 00:14:34.819 } 00:14:34.819 ] 00:14:34.819 }' 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:34.819 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=527 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.819 23:54:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.819 "name": "raid_bdev1", 00:14:34.819 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:34.819 "strip_size_kb": 64, 00:14:34.819 "state": "online", 00:14:34.819 "raid_level": "raid5f", 00:14:34.819 "superblock": true, 00:14:34.819 "num_base_bdevs": 4, 00:14:34.819 "num_base_bdevs_discovered": 4, 00:14:34.819 "num_base_bdevs_operational": 4, 00:14:34.819 "process": { 00:14:34.819 "type": "rebuild", 00:14:34.819 "target": "spare", 00:14:34.819 "progress": { 00:14:34.819 "blocks": 21120, 00:14:34.819 "percent": 11 00:14:34.819 } 00:14:34.819 }, 00:14:34.819 "base_bdevs_list": [ 00:14:34.819 { 00:14:34.819 "name": "spare", 00:14:34.819 "uuid": "8932201c-5ef4-55ef-9105-ad89aa1fbab4", 00:14:34.819 "is_configured": true, 00:14:34.819 "data_offset": 2048, 00:14:34.819 "data_size": 63488 00:14:34.819 }, 00:14:34.819 { 00:14:34.819 "name": "BaseBdev2", 00:14:34.819 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:34.819 "is_configured": true, 00:14:34.819 "data_offset": 2048, 00:14:34.819 "data_size": 63488 
00:14:34.819 }, 00:14:34.819 { 00:14:34.819 "name": "BaseBdev3", 00:14:34.819 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:34.819 "is_configured": true, 00:14:34.819 "data_offset": 2048, 00:14:34.819 "data_size": 63488 00:14:34.819 }, 00:14:34.819 { 00:14:34.819 "name": "BaseBdev4", 00:14:34.819 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:34.819 "is_configured": true, 00:14:34.819 "data_offset": 2048, 00:14:34.819 "data_size": 63488 00:14:34.819 } 00:14:34.819 ] 00:14:34.819 }' 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.819 23:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:36.211 23:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:36.211 23:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.211 23:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.211 23:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.211 23:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.211 23:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.211 23:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.211 23:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:36.211 23:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.211 23:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.211 23:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.211 23:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.211 "name": "raid_bdev1", 00:14:36.211 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:36.211 "strip_size_kb": 64, 00:14:36.211 "state": "online", 00:14:36.211 "raid_level": "raid5f", 00:14:36.211 "superblock": true, 00:14:36.211 "num_base_bdevs": 4, 00:14:36.211 "num_base_bdevs_discovered": 4, 00:14:36.211 "num_base_bdevs_operational": 4, 00:14:36.211 "process": { 00:14:36.211 "type": "rebuild", 00:14:36.211 "target": "spare", 00:14:36.211 "progress": { 00:14:36.211 "blocks": 42240, 00:14:36.211 "percent": 22 00:14:36.211 } 00:14:36.211 }, 00:14:36.211 "base_bdevs_list": [ 00:14:36.211 { 00:14:36.211 "name": "spare", 00:14:36.211 "uuid": "8932201c-5ef4-55ef-9105-ad89aa1fbab4", 00:14:36.211 "is_configured": true, 00:14:36.211 "data_offset": 2048, 00:14:36.211 "data_size": 63488 00:14:36.211 }, 00:14:36.211 { 00:14:36.211 "name": "BaseBdev2", 00:14:36.211 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:36.211 "is_configured": true, 00:14:36.211 "data_offset": 2048, 00:14:36.211 "data_size": 63488 00:14:36.211 }, 00:14:36.211 { 00:14:36.211 "name": "BaseBdev3", 00:14:36.211 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:36.211 "is_configured": true, 00:14:36.211 "data_offset": 2048, 00:14:36.211 "data_size": 63488 00:14:36.211 }, 00:14:36.211 { 00:14:36.211 "name": "BaseBdev4", 00:14:36.211 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:36.211 "is_configured": true, 00:14:36.211 "data_offset": 2048, 00:14:36.211 "data_size": 63488 00:14:36.211 } 00:14:36.211 ] 00:14:36.211 }' 00:14:36.211 23:54:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.211 23:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.211 23:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.211 23:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.211 23:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:37.168 23:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:37.168 23:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.168 23:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.168 23:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.168 23:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.168 23:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.168 23:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.168 23:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.168 23:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.168 23:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.168 23:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.168 23:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.168 "name": "raid_bdev1", 00:14:37.168 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:37.168 
"strip_size_kb": 64, 00:14:37.168 "state": "online", 00:14:37.168 "raid_level": "raid5f", 00:14:37.168 "superblock": true, 00:14:37.168 "num_base_bdevs": 4, 00:14:37.168 "num_base_bdevs_discovered": 4, 00:14:37.168 "num_base_bdevs_operational": 4, 00:14:37.168 "process": { 00:14:37.168 "type": "rebuild", 00:14:37.168 "target": "spare", 00:14:37.168 "progress": { 00:14:37.168 "blocks": 65280, 00:14:37.168 "percent": 34 00:14:37.168 } 00:14:37.168 }, 00:14:37.168 "base_bdevs_list": [ 00:14:37.168 { 00:14:37.168 "name": "spare", 00:14:37.168 "uuid": "8932201c-5ef4-55ef-9105-ad89aa1fbab4", 00:14:37.168 "is_configured": true, 00:14:37.168 "data_offset": 2048, 00:14:37.168 "data_size": 63488 00:14:37.168 }, 00:14:37.168 { 00:14:37.168 "name": "BaseBdev2", 00:14:37.168 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:37.168 "is_configured": true, 00:14:37.168 "data_offset": 2048, 00:14:37.168 "data_size": 63488 00:14:37.168 }, 00:14:37.168 { 00:14:37.168 "name": "BaseBdev3", 00:14:37.168 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:37.168 "is_configured": true, 00:14:37.168 "data_offset": 2048, 00:14:37.168 "data_size": 63488 00:14:37.168 }, 00:14:37.168 { 00:14:37.168 "name": "BaseBdev4", 00:14:37.168 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:37.168 "is_configured": true, 00:14:37.168 "data_offset": 2048, 00:14:37.168 "data_size": 63488 00:14:37.168 } 00:14:37.168 ] 00:14:37.168 }' 00:14:37.168 23:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.168 23:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.168 23:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.169 23:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.169 23:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:38.110 
23:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:38.110 23:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.110 23:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.110 23:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.110 23:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.110 23:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.110 23:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.110 23:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.110 23:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.110 23:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.370 23:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.370 23:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.370 "name": "raid_bdev1", 00:14:38.370 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:38.370 "strip_size_kb": 64, 00:14:38.370 "state": "online", 00:14:38.370 "raid_level": "raid5f", 00:14:38.370 "superblock": true, 00:14:38.370 "num_base_bdevs": 4, 00:14:38.370 "num_base_bdevs_discovered": 4, 00:14:38.370 "num_base_bdevs_operational": 4, 00:14:38.370 "process": { 00:14:38.370 "type": "rebuild", 00:14:38.370 "target": "spare", 00:14:38.370 "progress": { 00:14:38.370 "blocks": 86400, 00:14:38.370 "percent": 45 00:14:38.370 } 00:14:38.370 }, 00:14:38.370 "base_bdevs_list": [ 00:14:38.370 { 00:14:38.370 "name": "spare", 00:14:38.370 "uuid": 
"8932201c-5ef4-55ef-9105-ad89aa1fbab4", 00:14:38.370 "is_configured": true, 00:14:38.370 "data_offset": 2048, 00:14:38.370 "data_size": 63488 00:14:38.370 }, 00:14:38.370 { 00:14:38.370 "name": "BaseBdev2", 00:14:38.370 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:38.370 "is_configured": true, 00:14:38.370 "data_offset": 2048, 00:14:38.370 "data_size": 63488 00:14:38.370 }, 00:14:38.370 { 00:14:38.370 "name": "BaseBdev3", 00:14:38.370 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:38.370 "is_configured": true, 00:14:38.370 "data_offset": 2048, 00:14:38.370 "data_size": 63488 00:14:38.370 }, 00:14:38.370 { 00:14:38.370 "name": "BaseBdev4", 00:14:38.370 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:38.370 "is_configured": true, 00:14:38.370 "data_offset": 2048, 00:14:38.370 "data_size": 63488 00:14:38.370 } 00:14:38.370 ] 00:14:38.370 }' 00:14:38.370 23:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.370 23:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.370 23:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.370 23:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.370 23:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:39.310 23:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:39.310 23:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.310 23:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.310 23:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.310 23:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:14:39.310 23:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.310 23:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.310 23:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.310 23:54:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.310 23:54:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.310 23:54:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.310 23:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.310 "name": "raid_bdev1", 00:14:39.310 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:39.310 "strip_size_kb": 64, 00:14:39.310 "state": "online", 00:14:39.310 "raid_level": "raid5f", 00:14:39.310 "superblock": true, 00:14:39.310 "num_base_bdevs": 4, 00:14:39.310 "num_base_bdevs_discovered": 4, 00:14:39.310 "num_base_bdevs_operational": 4, 00:14:39.310 "process": { 00:14:39.310 "type": "rebuild", 00:14:39.310 "target": "spare", 00:14:39.310 "progress": { 00:14:39.310 "blocks": 109440, 00:14:39.310 "percent": 57 00:14:39.310 } 00:14:39.310 }, 00:14:39.310 "base_bdevs_list": [ 00:14:39.310 { 00:14:39.310 "name": "spare", 00:14:39.310 "uuid": "8932201c-5ef4-55ef-9105-ad89aa1fbab4", 00:14:39.310 "is_configured": true, 00:14:39.310 "data_offset": 2048, 00:14:39.310 "data_size": 63488 00:14:39.310 }, 00:14:39.310 { 00:14:39.310 "name": "BaseBdev2", 00:14:39.310 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:39.310 "is_configured": true, 00:14:39.310 "data_offset": 2048, 00:14:39.310 "data_size": 63488 00:14:39.310 }, 00:14:39.310 { 00:14:39.310 "name": "BaseBdev3", 00:14:39.310 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:39.310 "is_configured": true, 00:14:39.310 
"data_offset": 2048, 00:14:39.310 "data_size": 63488 00:14:39.310 }, 00:14:39.310 { 00:14:39.310 "name": "BaseBdev4", 00:14:39.310 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:39.310 "is_configured": true, 00:14:39.310 "data_offset": 2048, 00:14:39.310 "data_size": 63488 00:14:39.310 } 00:14:39.310 ] 00:14:39.310 }' 00:14:39.310 23:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.570 23:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.570 23:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.570 23:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.570 23:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:40.510 23:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:40.510 23:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.510 23:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.510 23:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.510 23:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.510 23:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.510 23:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.510 23:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.510 23:54:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.510 23:54:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:40.510 23:54:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.510 23:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.510 "name": "raid_bdev1", 00:14:40.510 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:40.510 "strip_size_kb": 64, 00:14:40.510 "state": "online", 00:14:40.510 "raid_level": "raid5f", 00:14:40.510 "superblock": true, 00:14:40.510 "num_base_bdevs": 4, 00:14:40.510 "num_base_bdevs_discovered": 4, 00:14:40.510 "num_base_bdevs_operational": 4, 00:14:40.510 "process": { 00:14:40.510 "type": "rebuild", 00:14:40.510 "target": "spare", 00:14:40.510 "progress": { 00:14:40.510 "blocks": 130560, 00:14:40.510 "percent": 68 00:14:40.510 } 00:14:40.510 }, 00:14:40.510 "base_bdevs_list": [ 00:14:40.510 { 00:14:40.510 "name": "spare", 00:14:40.510 "uuid": "8932201c-5ef4-55ef-9105-ad89aa1fbab4", 00:14:40.510 "is_configured": true, 00:14:40.510 "data_offset": 2048, 00:14:40.510 "data_size": 63488 00:14:40.510 }, 00:14:40.510 { 00:14:40.510 "name": "BaseBdev2", 00:14:40.510 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:40.510 "is_configured": true, 00:14:40.510 "data_offset": 2048, 00:14:40.510 "data_size": 63488 00:14:40.510 }, 00:14:40.510 { 00:14:40.510 "name": "BaseBdev3", 00:14:40.510 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:40.510 "is_configured": true, 00:14:40.510 "data_offset": 2048, 00:14:40.510 "data_size": 63488 00:14:40.510 }, 00:14:40.510 { 00:14:40.510 "name": "BaseBdev4", 00:14:40.510 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:40.510 "is_configured": true, 00:14:40.510 "data_offset": 2048, 00:14:40.510 "data_size": 63488 00:14:40.510 } 00:14:40.510 ] 00:14:40.510 }' 00:14:40.510 23:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.510 23:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:14:40.510 23:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.789 23:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.789 23:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:41.773 23:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.773 23:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.773 23:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.773 23:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.773 23:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.773 23:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.773 23:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.773 23:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.773 23:54:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.773 23:54:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.773 23:54:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.773 23:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.773 "name": "raid_bdev1", 00:14:41.773 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:41.773 "strip_size_kb": 64, 00:14:41.773 "state": "online", 00:14:41.773 "raid_level": "raid5f", 00:14:41.773 "superblock": true, 00:14:41.773 "num_base_bdevs": 4, 00:14:41.773 "num_base_bdevs_discovered": 4, 
00:14:41.773 "num_base_bdevs_operational": 4, 00:14:41.773 "process": { 00:14:41.773 "type": "rebuild", 00:14:41.773 "target": "spare", 00:14:41.773 "progress": { 00:14:41.773 "blocks": 153600, 00:14:41.773 "percent": 80 00:14:41.773 } 00:14:41.773 }, 00:14:41.773 "base_bdevs_list": [ 00:14:41.773 { 00:14:41.773 "name": "spare", 00:14:41.773 "uuid": "8932201c-5ef4-55ef-9105-ad89aa1fbab4", 00:14:41.773 "is_configured": true, 00:14:41.773 "data_offset": 2048, 00:14:41.773 "data_size": 63488 00:14:41.773 }, 00:14:41.773 { 00:14:41.773 "name": "BaseBdev2", 00:14:41.773 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:41.773 "is_configured": true, 00:14:41.773 "data_offset": 2048, 00:14:41.773 "data_size": 63488 00:14:41.773 }, 00:14:41.773 { 00:14:41.773 "name": "BaseBdev3", 00:14:41.773 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:41.773 "is_configured": true, 00:14:41.773 "data_offset": 2048, 00:14:41.773 "data_size": 63488 00:14:41.773 }, 00:14:41.773 { 00:14:41.773 "name": "BaseBdev4", 00:14:41.773 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:41.773 "is_configured": true, 00:14:41.773 "data_offset": 2048, 00:14:41.773 "data_size": 63488 00:14:41.773 } 00:14:41.773 ] 00:14:41.773 }' 00:14:41.773 23:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.773 23:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.773 23:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.773 23:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.773 23:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.713 23:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.713 23:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:42.713 23:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.713 23:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.713 23:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.713 23:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.972 23:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.972 23:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.972 23:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.972 23:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.972 23:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.972 23:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.972 "name": "raid_bdev1", 00:14:42.972 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:42.972 "strip_size_kb": 64, 00:14:42.972 "state": "online", 00:14:42.972 "raid_level": "raid5f", 00:14:42.972 "superblock": true, 00:14:42.972 "num_base_bdevs": 4, 00:14:42.972 "num_base_bdevs_discovered": 4, 00:14:42.972 "num_base_bdevs_operational": 4, 00:14:42.972 "process": { 00:14:42.972 "type": "rebuild", 00:14:42.972 "target": "spare", 00:14:42.972 "progress": { 00:14:42.972 "blocks": 174720, 00:14:42.972 "percent": 91 00:14:42.972 } 00:14:42.972 }, 00:14:42.972 "base_bdevs_list": [ 00:14:42.972 { 00:14:42.972 "name": "spare", 00:14:42.972 "uuid": "8932201c-5ef4-55ef-9105-ad89aa1fbab4", 00:14:42.972 "is_configured": true, 00:14:42.972 "data_offset": 2048, 00:14:42.972 "data_size": 63488 00:14:42.972 }, 00:14:42.972 { 00:14:42.972 "name": "BaseBdev2", 
00:14:42.972 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:42.972 "is_configured": true, 00:14:42.972 "data_offset": 2048, 00:14:42.972 "data_size": 63488 00:14:42.972 }, 00:14:42.972 { 00:14:42.972 "name": "BaseBdev3", 00:14:42.972 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:42.972 "is_configured": true, 00:14:42.972 "data_offset": 2048, 00:14:42.972 "data_size": 63488 00:14:42.972 }, 00:14:42.972 { 00:14:42.972 "name": "BaseBdev4", 00:14:42.973 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:42.973 "is_configured": true, 00:14:42.973 "data_offset": 2048, 00:14:42.973 "data_size": 63488 00:14:42.973 } 00:14:42.973 ] 00:14:42.973 }' 00:14:42.973 23:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.973 23:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.973 23:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.973 23:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.973 23:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:43.911 [2024-11-02 23:54:37.642176] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:43.911 [2024-11-02 23:54:37.642288] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:43.911 [2024-11-02 23:54:37.642449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.911 23:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.911 23:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.911 23:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.911 23:54:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.911 23:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.911 23:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.911 23:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.911 23:54:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.911 23:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.911 23:54:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.911 23:54:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.911 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.911 "name": "raid_bdev1", 00:14:43.911 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:43.911 "strip_size_kb": 64, 00:14:43.911 "state": "online", 00:14:43.911 "raid_level": "raid5f", 00:14:43.911 "superblock": true, 00:14:43.911 "num_base_bdevs": 4, 00:14:43.911 "num_base_bdevs_discovered": 4, 00:14:43.911 "num_base_bdevs_operational": 4, 00:14:43.911 "base_bdevs_list": [ 00:14:43.911 { 00:14:43.911 "name": "spare", 00:14:43.911 "uuid": "8932201c-5ef4-55ef-9105-ad89aa1fbab4", 00:14:43.911 "is_configured": true, 00:14:43.911 "data_offset": 2048, 00:14:43.911 "data_size": 63488 00:14:43.911 }, 00:14:43.911 { 00:14:43.911 "name": "BaseBdev2", 00:14:43.911 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:43.911 "is_configured": true, 00:14:43.911 "data_offset": 2048, 00:14:43.911 "data_size": 63488 00:14:43.911 }, 00:14:43.911 { 00:14:43.911 "name": "BaseBdev3", 00:14:43.912 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:43.912 "is_configured": true, 00:14:43.912 "data_offset": 2048, 00:14:43.912 
"data_size": 63488 00:14:43.912 }, 00:14:43.912 { 00:14:43.912 "name": "BaseBdev4", 00:14:43.912 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:43.912 "is_configured": true, 00:14:43.912 "data_offset": 2048, 00:14:43.912 "data_size": 63488 00:14:43.912 } 00:14:43.912 ] 00:14:43.912 }' 00:14:44.171 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.171 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:44.171 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.172 23:54:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.172 "name": "raid_bdev1", 00:14:44.172 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:44.172 "strip_size_kb": 64, 00:14:44.172 "state": "online", 00:14:44.172 "raid_level": "raid5f", 00:14:44.172 "superblock": true, 00:14:44.172 "num_base_bdevs": 4, 00:14:44.172 "num_base_bdevs_discovered": 4, 00:14:44.172 "num_base_bdevs_operational": 4, 00:14:44.172 "base_bdevs_list": [ 00:14:44.172 { 00:14:44.172 "name": "spare", 00:14:44.172 "uuid": "8932201c-5ef4-55ef-9105-ad89aa1fbab4", 00:14:44.172 "is_configured": true, 00:14:44.172 "data_offset": 2048, 00:14:44.172 "data_size": 63488 00:14:44.172 }, 00:14:44.172 { 00:14:44.172 "name": "BaseBdev2", 00:14:44.172 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:44.172 "is_configured": true, 00:14:44.172 "data_offset": 2048, 00:14:44.172 "data_size": 63488 00:14:44.172 }, 00:14:44.172 { 00:14:44.172 "name": "BaseBdev3", 00:14:44.172 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:44.172 "is_configured": true, 00:14:44.172 "data_offset": 2048, 00:14:44.172 "data_size": 63488 00:14:44.172 }, 00:14:44.172 { 00:14:44.172 "name": "BaseBdev4", 00:14:44.172 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:44.172 "is_configured": true, 00:14:44.172 "data_offset": 2048, 00:14:44.172 "data_size": 63488 00:14:44.172 } 00:14:44.172 ] 00:14:44.172 }' 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.172 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.431 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.431 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.431 "name": "raid_bdev1", 00:14:44.431 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:44.431 "strip_size_kb": 64, 00:14:44.431 "state": "online", 00:14:44.431 "raid_level": "raid5f", 00:14:44.431 "superblock": true, 00:14:44.431 "num_base_bdevs": 4, 00:14:44.431 "num_base_bdevs_discovered": 4, 00:14:44.431 
"num_base_bdevs_operational": 4, 00:14:44.431 "base_bdevs_list": [ 00:14:44.431 { 00:14:44.431 "name": "spare", 00:14:44.431 "uuid": "8932201c-5ef4-55ef-9105-ad89aa1fbab4", 00:14:44.432 "is_configured": true, 00:14:44.432 "data_offset": 2048, 00:14:44.432 "data_size": 63488 00:14:44.432 }, 00:14:44.432 { 00:14:44.432 "name": "BaseBdev2", 00:14:44.432 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:44.432 "is_configured": true, 00:14:44.432 "data_offset": 2048, 00:14:44.432 "data_size": 63488 00:14:44.432 }, 00:14:44.432 { 00:14:44.432 "name": "BaseBdev3", 00:14:44.432 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:44.432 "is_configured": true, 00:14:44.432 "data_offset": 2048, 00:14:44.432 "data_size": 63488 00:14:44.432 }, 00:14:44.432 { 00:14:44.432 "name": "BaseBdev4", 00:14:44.432 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:44.432 "is_configured": true, 00:14:44.432 "data_offset": 2048, 00:14:44.432 "data_size": 63488 00:14:44.432 } 00:14:44.432 ] 00:14:44.432 }' 00:14:44.432 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.432 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.692 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:44.692 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.692 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.692 [2024-11-02 23:54:38.754121] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:44.692 [2024-11-02 23:54:38.754200] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.692 [2024-11-02 23:54:38.754297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.692 [2024-11-02 23:54:38.754414] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:14:44.692 [2024-11-02 23:54:38.754460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:44.692 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.692 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.692 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:44.692 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.692 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.692 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.951 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:44.951 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:44.951 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:44.951 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:44.951 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:44.951 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:44.951 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:44.951 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:44.951 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:44.951 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:44.951 23:54:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:44.951 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:44.951 23:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:44.951 /dev/nbd0 00:14:44.951 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.212 1+0 records in 00:14:45.212 1+0 records out 00:14:45.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549135 s, 7.5 MB/s 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # size=4096 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:45.212 /dev/nbd1 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:45.212 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:45.473 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:45.473 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:45.473 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:45.473 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.473 1+0 records in 00:14:45.473 1+0 records out 00:14:45.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328458 s, 12.5 MB/s 00:14:45.473 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.473 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:45.473 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.473 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:45.473 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:45.473 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:45.473 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:45.473 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:45.473 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:45.473 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.473 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:45.473 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:45.473 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:45.473 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:45.473 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:14:45.731 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:45.731 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:45.731 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:45.731 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:45.731 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:45.731 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:45.732 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:45.732 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:45.732 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:45.732 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:45.990 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.991 [2024-11-02 23:54:39.855902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:45.991 [2024-11-02 23:54:39.855964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.991 [2024-11-02 23:54:39.855985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:45.991 [2024-11-02 23:54:39.855996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.991 [2024-11-02 23:54:39.858162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.991 [2024-11-02 23:54:39.858203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:45.991 [2024-11-02 23:54:39.858282] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:45.991 [2024-11-02 23:54:39.858347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:45.991 [2024-11-02 23:54:39.858462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:45.991 [2024-11-02 23:54:39.858612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:14:45.991 [2024-11-02 23:54:39.858678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:45.991 spare 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.991 [2024-11-02 23:54:39.958586] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:14:45.991 [2024-11-02 23:54:39.958615] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:45.991 [2024-11-02 23:54:39.958891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045820 00:14:45.991 [2024-11-02 23:54:39.959353] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:14:45.991 [2024-11-02 23:54:39.959378] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:14:45.991 [2024-11-02 23:54:39.959517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.991 23:54:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.991 23:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.991 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.991 "name": "raid_bdev1", 00:14:45.991 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:45.991 "strip_size_kb": 64, 00:14:45.991 "state": "online", 00:14:45.991 "raid_level": "raid5f", 00:14:45.991 "superblock": true, 00:14:45.991 "num_base_bdevs": 4, 00:14:45.991 "num_base_bdevs_discovered": 4, 00:14:45.991 "num_base_bdevs_operational": 4, 00:14:45.991 "base_bdevs_list": [ 00:14:45.991 { 00:14:45.991 "name": "spare", 00:14:45.991 "uuid": "8932201c-5ef4-55ef-9105-ad89aa1fbab4", 00:14:45.991 "is_configured": true, 00:14:45.991 "data_offset": 2048, 00:14:45.991 "data_size": 63488 00:14:45.991 }, 00:14:45.991 { 00:14:45.991 "name": "BaseBdev2", 00:14:45.991 "uuid": 
"5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:45.991 "is_configured": true, 00:14:45.991 "data_offset": 2048, 00:14:45.991 "data_size": 63488 00:14:45.991 }, 00:14:45.991 { 00:14:45.991 "name": "BaseBdev3", 00:14:45.991 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:45.991 "is_configured": true, 00:14:45.991 "data_offset": 2048, 00:14:45.991 "data_size": 63488 00:14:45.991 }, 00:14:45.991 { 00:14:45.991 "name": "BaseBdev4", 00:14:45.991 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:45.991 "is_configured": true, 00:14:45.991 "data_offset": 2048, 00:14:45.991 "data_size": 63488 00:14:45.991 } 00:14:45.991 ] 00:14:45.991 }' 00:14:45.991 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.991 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.571 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:46.571 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.571 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:46.571 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:46.572 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.572 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.572 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.572 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.572 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.572 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.572 23:54:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.572 "name": "raid_bdev1", 00:14:46.572 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:46.572 "strip_size_kb": 64, 00:14:46.572 "state": "online", 00:14:46.572 "raid_level": "raid5f", 00:14:46.572 "superblock": true, 00:14:46.572 "num_base_bdevs": 4, 00:14:46.572 "num_base_bdevs_discovered": 4, 00:14:46.572 "num_base_bdevs_operational": 4, 00:14:46.572 "base_bdevs_list": [ 00:14:46.572 { 00:14:46.572 "name": "spare", 00:14:46.572 "uuid": "8932201c-5ef4-55ef-9105-ad89aa1fbab4", 00:14:46.572 "is_configured": true, 00:14:46.572 "data_offset": 2048, 00:14:46.572 "data_size": 63488 00:14:46.572 }, 00:14:46.572 { 00:14:46.572 "name": "BaseBdev2", 00:14:46.572 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:46.572 "is_configured": true, 00:14:46.572 "data_offset": 2048, 00:14:46.572 "data_size": 63488 00:14:46.572 }, 00:14:46.572 { 00:14:46.572 "name": "BaseBdev3", 00:14:46.572 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:46.572 "is_configured": true, 00:14:46.572 "data_offset": 2048, 00:14:46.572 "data_size": 63488 00:14:46.572 }, 00:14:46.572 { 00:14:46.572 "name": "BaseBdev4", 00:14:46.572 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:46.572 "is_configured": true, 00:14:46.572 "data_offset": 2048, 00:14:46.572 "data_size": 63488 00:14:46.572 } 00:14:46.572 ] 00:14:46.572 }' 00:14:46.572 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.572 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.572 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.572 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:46.572 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.572 
23:54:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.572 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.572 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:46.572 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.572 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.572 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:46.572 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.572 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.573 [2024-11-02 23:54:40.596040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.573 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.573 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:46.573 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.573 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.573 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.573 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.573 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.573 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.573 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:14:46.573 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.573 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.573 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.573 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.573 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.573 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.573 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.573 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.573 "name": "raid_bdev1", 00:14:46.573 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:46.573 "strip_size_kb": 64, 00:14:46.573 "state": "online", 00:14:46.573 "raid_level": "raid5f", 00:14:46.573 "superblock": true, 00:14:46.573 "num_base_bdevs": 4, 00:14:46.573 "num_base_bdevs_discovered": 3, 00:14:46.573 "num_base_bdevs_operational": 3, 00:14:46.573 "base_bdevs_list": [ 00:14:46.573 { 00:14:46.573 "name": null, 00:14:46.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.573 "is_configured": false, 00:14:46.573 "data_offset": 0, 00:14:46.573 "data_size": 63488 00:14:46.573 }, 00:14:46.573 { 00:14:46.573 "name": "BaseBdev2", 00:14:46.573 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:46.573 "is_configured": true, 00:14:46.573 "data_offset": 2048, 00:14:46.573 "data_size": 63488 00:14:46.573 }, 00:14:46.573 { 00:14:46.574 "name": "BaseBdev3", 00:14:46.574 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:46.574 "is_configured": true, 00:14:46.574 "data_offset": 2048, 00:14:46.574 "data_size": 63488 00:14:46.574 }, 00:14:46.574 { 00:14:46.574 "name": "BaseBdev4", 
00:14:46.574 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:46.574 "is_configured": true, 00:14:46.574 "data_offset": 2048, 00:14:46.574 "data_size": 63488 00:14:46.574 } 00:14:46.574 ] 00:14:46.574 }' 00:14:46.574 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.574 23:54:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.146 23:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:47.146 23:54:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.146 23:54:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.146 [2024-11-02 23:54:41.047397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.146 [2024-11-02 23:54:41.047759] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:47.146 [2024-11-02 23:54:41.047832] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:47.146 [2024-11-02 23:54:41.047971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.146 [2024-11-02 23:54:41.055189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000458f0 00:14:47.146 23:54:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.146 23:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:47.146 [2024-11-02 23:54:41.057781] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:48.086 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.086 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.086 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.086 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.086 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.086 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.086 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.086 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.086 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.086 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.086 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.086 "name": "raid_bdev1", 00:14:48.086 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:48.086 "strip_size_kb": 64, 00:14:48.086 "state": "online", 00:14:48.086 
"raid_level": "raid5f", 00:14:48.086 "superblock": true, 00:14:48.086 "num_base_bdevs": 4, 00:14:48.086 "num_base_bdevs_discovered": 4, 00:14:48.086 "num_base_bdevs_operational": 4, 00:14:48.086 "process": { 00:14:48.086 "type": "rebuild", 00:14:48.086 "target": "spare", 00:14:48.086 "progress": { 00:14:48.086 "blocks": 19200, 00:14:48.086 "percent": 10 00:14:48.086 } 00:14:48.086 }, 00:14:48.086 "base_bdevs_list": [ 00:14:48.086 { 00:14:48.086 "name": "spare", 00:14:48.086 "uuid": "8932201c-5ef4-55ef-9105-ad89aa1fbab4", 00:14:48.086 "is_configured": true, 00:14:48.086 "data_offset": 2048, 00:14:48.086 "data_size": 63488 00:14:48.086 }, 00:14:48.086 { 00:14:48.086 "name": "BaseBdev2", 00:14:48.086 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:48.086 "is_configured": true, 00:14:48.086 "data_offset": 2048, 00:14:48.086 "data_size": 63488 00:14:48.086 }, 00:14:48.086 { 00:14:48.086 "name": "BaseBdev3", 00:14:48.086 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:48.086 "is_configured": true, 00:14:48.086 "data_offset": 2048, 00:14:48.086 "data_size": 63488 00:14:48.086 }, 00:14:48.086 { 00:14:48.086 "name": "BaseBdev4", 00:14:48.086 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:48.086 "is_configured": true, 00:14:48.086 "data_offset": 2048, 00:14:48.086 "data_size": 63488 00:14:48.086 } 00:14:48.086 ] 00:14:48.086 }' 00:14:48.086 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.086 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.086 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.346 [2024-11-02 23:54:42.225390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.346 [2024-11-02 23:54:42.264515] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:48.346 [2024-11-02 23:54:42.264579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.346 [2024-11-02 23:54:42.264603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.346 [2024-11-02 23:54:42.264612] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.346 "name": "raid_bdev1", 00:14:48.346 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:48.346 "strip_size_kb": 64, 00:14:48.346 "state": "online", 00:14:48.346 "raid_level": "raid5f", 00:14:48.346 "superblock": true, 00:14:48.346 "num_base_bdevs": 4, 00:14:48.346 "num_base_bdevs_discovered": 3, 00:14:48.346 "num_base_bdevs_operational": 3, 00:14:48.346 "base_bdevs_list": [ 00:14:48.346 { 00:14:48.346 "name": null, 00:14:48.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.346 "is_configured": false, 00:14:48.346 "data_offset": 0, 00:14:48.346 "data_size": 63488 00:14:48.346 }, 00:14:48.346 { 00:14:48.346 "name": "BaseBdev2", 00:14:48.346 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:48.346 "is_configured": true, 00:14:48.346 "data_offset": 2048, 00:14:48.346 "data_size": 63488 00:14:48.346 }, 00:14:48.346 { 00:14:48.346 "name": "BaseBdev3", 00:14:48.346 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:48.346 "is_configured": true, 00:14:48.346 "data_offset": 2048, 00:14:48.346 "data_size": 63488 00:14:48.346 }, 00:14:48.346 { 00:14:48.346 "name": "BaseBdev4", 00:14:48.346 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:48.346 "is_configured": true, 00:14:48.346 "data_offset": 2048, 00:14:48.346 "data_size": 63488 00:14:48.346 } 00:14:48.346 ] 00:14:48.346 
}' 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.346 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.915 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:48.915 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.915 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.915 [2024-11-02 23:54:42.744682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:48.915 [2024-11-02 23:54:42.744755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.915 [2024-11-02 23:54:42.744805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:14:48.915 [2024-11-02 23:54:42.744822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.915 [2024-11-02 23:54:42.745323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.915 [2024-11-02 23:54:42.745354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:48.915 [2024-11-02 23:54:42.745455] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:48.915 [2024-11-02 23:54:42.745468] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:48.915 [2024-11-02 23:54:42.745488] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:48.915 [2024-11-02 23:54:42.745527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:48.915 [2024-11-02 23:54:42.751179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000459c0 00:14:48.915 spare 00:14:48.915 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.915 23:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:48.915 [2024-11-02 23:54:42.753606] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:49.853 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.853 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.853 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.853 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.853 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.853 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.853 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.853 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.853 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.853 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.853 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.853 "name": "raid_bdev1", 00:14:49.853 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:49.853 "strip_size_kb": 64, 00:14:49.853 "state": 
"online", 00:14:49.853 "raid_level": "raid5f", 00:14:49.853 "superblock": true, 00:14:49.853 "num_base_bdevs": 4, 00:14:49.853 "num_base_bdevs_discovered": 4, 00:14:49.853 "num_base_bdevs_operational": 4, 00:14:49.853 "process": { 00:14:49.853 "type": "rebuild", 00:14:49.853 "target": "spare", 00:14:49.853 "progress": { 00:14:49.853 "blocks": 19200, 00:14:49.853 "percent": 10 00:14:49.853 } 00:14:49.853 }, 00:14:49.853 "base_bdevs_list": [ 00:14:49.853 { 00:14:49.853 "name": "spare", 00:14:49.853 "uuid": "8932201c-5ef4-55ef-9105-ad89aa1fbab4", 00:14:49.853 "is_configured": true, 00:14:49.853 "data_offset": 2048, 00:14:49.853 "data_size": 63488 00:14:49.853 }, 00:14:49.853 { 00:14:49.853 "name": "BaseBdev2", 00:14:49.853 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:49.853 "is_configured": true, 00:14:49.853 "data_offset": 2048, 00:14:49.853 "data_size": 63488 00:14:49.853 }, 00:14:49.853 { 00:14:49.853 "name": "BaseBdev3", 00:14:49.853 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:49.853 "is_configured": true, 00:14:49.853 "data_offset": 2048, 00:14:49.853 "data_size": 63488 00:14:49.853 }, 00:14:49.853 { 00:14:49.853 "name": "BaseBdev4", 00:14:49.853 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:49.853 "is_configured": true, 00:14:49.853 "data_offset": 2048, 00:14:49.853 "data_size": 63488 00:14:49.853 } 00:14:49.853 ] 00:14:49.853 }' 00:14:49.853 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.853 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.853 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.853 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.853 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:49.853 23:54:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.853 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.853 [2024-11-02 23:54:43.913698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:50.111 [2024-11-02 23:54:43.959962] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:50.111 [2024-11-02 23:54:43.960036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.111 [2024-11-02 23:54:43.960055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:50.111 [2024-11-02 23:54:43.960068] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:50.111 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.111 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:50.111 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.111 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.111 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.111 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.111 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.111 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.111 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.111 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.111 23:54:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.111 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.111 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.111 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.111 23:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.111 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.111 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.111 "name": "raid_bdev1", 00:14:50.111 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:50.111 "strip_size_kb": 64, 00:14:50.111 "state": "online", 00:14:50.111 "raid_level": "raid5f", 00:14:50.111 "superblock": true, 00:14:50.111 "num_base_bdevs": 4, 00:14:50.111 "num_base_bdevs_discovered": 3, 00:14:50.111 "num_base_bdevs_operational": 3, 00:14:50.111 "base_bdevs_list": [ 00:14:50.111 { 00:14:50.111 "name": null, 00:14:50.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.111 "is_configured": false, 00:14:50.111 "data_offset": 0, 00:14:50.111 "data_size": 63488 00:14:50.111 }, 00:14:50.111 { 00:14:50.111 "name": "BaseBdev2", 00:14:50.111 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:50.111 "is_configured": true, 00:14:50.111 "data_offset": 2048, 00:14:50.111 "data_size": 63488 00:14:50.111 }, 00:14:50.111 { 00:14:50.111 "name": "BaseBdev3", 00:14:50.111 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:50.111 "is_configured": true, 00:14:50.111 "data_offset": 2048, 00:14:50.111 "data_size": 63488 00:14:50.111 }, 00:14:50.111 { 00:14:50.111 "name": "BaseBdev4", 00:14:50.111 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:50.111 "is_configured": true, 00:14:50.111 "data_offset": 2048, 00:14:50.111 
"data_size": 63488 00:14:50.111 } 00:14:50.111 ] 00:14:50.111 }' 00:14:50.111 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.111 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.370 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:50.370 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.370 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:50.370 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:50.370 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.370 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.370 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.370 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.370 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.370 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.370 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.370 "name": "raid_bdev1", 00:14:50.370 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:50.370 "strip_size_kb": 64, 00:14:50.370 "state": "online", 00:14:50.370 "raid_level": "raid5f", 00:14:50.370 "superblock": true, 00:14:50.370 "num_base_bdevs": 4, 00:14:50.370 "num_base_bdevs_discovered": 3, 00:14:50.370 "num_base_bdevs_operational": 3, 00:14:50.370 "base_bdevs_list": [ 00:14:50.370 { 00:14:50.370 "name": null, 00:14:50.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.370 
"is_configured": false, 00:14:50.370 "data_offset": 0, 00:14:50.370 "data_size": 63488 00:14:50.370 }, 00:14:50.370 { 00:14:50.370 "name": "BaseBdev2", 00:14:50.370 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:50.370 "is_configured": true, 00:14:50.370 "data_offset": 2048, 00:14:50.370 "data_size": 63488 00:14:50.370 }, 00:14:50.370 { 00:14:50.370 "name": "BaseBdev3", 00:14:50.370 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:50.370 "is_configured": true, 00:14:50.370 "data_offset": 2048, 00:14:50.370 "data_size": 63488 00:14:50.370 }, 00:14:50.370 { 00:14:50.370 "name": "BaseBdev4", 00:14:50.370 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:50.370 "is_configured": true, 00:14:50.370 "data_offset": 2048, 00:14:50.370 "data_size": 63488 00:14:50.370 } 00:14:50.370 ] 00:14:50.370 }' 00:14:50.370 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.630 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:50.630 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.630 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:50.630 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:50.630 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.630 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.630 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.630 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:50.630 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.630 23:54:44 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.630 [2024-11-02 23:54:44.547664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:50.630 [2024-11-02 23:54:44.547721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.630 [2024-11-02 23:54:44.547753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:50.630 [2024-11-02 23:54:44.547769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.630 [2024-11-02 23:54:44.548239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.630 [2024-11-02 23:54:44.548270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:50.630 [2024-11-02 23:54:44.548344] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:50.630 [2024-11-02 23:54:44.548372] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:50.630 [2024-11-02 23:54:44.548382] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:50.630 [2024-11-02 23:54:44.548400] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:50.630 BaseBdev1 00:14:50.630 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.630 23:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:51.567 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:51.567 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.567 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:51.567 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.567 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.567 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.567 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.567 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.567 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.567 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.567 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.567 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.567 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.567 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.567 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.567 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.567 "name": "raid_bdev1", 00:14:51.567 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:51.567 "strip_size_kb": 64, 00:14:51.567 "state": "online", 00:14:51.567 "raid_level": "raid5f", 00:14:51.567 "superblock": true, 00:14:51.567 "num_base_bdevs": 4, 00:14:51.567 "num_base_bdevs_discovered": 3, 00:14:51.567 "num_base_bdevs_operational": 3, 00:14:51.567 "base_bdevs_list": [ 00:14:51.567 { 00:14:51.567 "name": null, 00:14:51.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.567 "is_configured": false, 00:14:51.567 
"data_offset": 0, 00:14:51.567 "data_size": 63488 00:14:51.567 }, 00:14:51.567 { 00:14:51.567 "name": "BaseBdev2", 00:14:51.567 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:51.567 "is_configured": true, 00:14:51.567 "data_offset": 2048, 00:14:51.567 "data_size": 63488 00:14:51.567 }, 00:14:51.567 { 00:14:51.567 "name": "BaseBdev3", 00:14:51.567 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:51.567 "is_configured": true, 00:14:51.567 "data_offset": 2048, 00:14:51.567 "data_size": 63488 00:14:51.567 }, 00:14:51.567 { 00:14:51.567 "name": "BaseBdev4", 00:14:51.567 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:51.567 "is_configured": true, 00:14:51.567 "data_offset": 2048, 00:14:51.567 "data_size": 63488 00:14:51.567 } 00:14:51.567 ] 00:14:51.567 }' 00:14:51.567 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.567 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.143 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.143 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.143 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.143 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.143 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.143 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.143 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.143 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.143 23:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.143 "name": "raid_bdev1", 00:14:52.143 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:52.143 "strip_size_kb": 64, 00:14:52.143 "state": "online", 00:14:52.143 "raid_level": "raid5f", 00:14:52.143 "superblock": true, 00:14:52.143 "num_base_bdevs": 4, 00:14:52.143 "num_base_bdevs_discovered": 3, 00:14:52.143 "num_base_bdevs_operational": 3, 00:14:52.143 "base_bdevs_list": [ 00:14:52.143 { 00:14:52.143 "name": null, 00:14:52.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.143 "is_configured": false, 00:14:52.143 "data_offset": 0, 00:14:52.143 "data_size": 63488 00:14:52.143 }, 00:14:52.143 { 00:14:52.143 "name": "BaseBdev2", 00:14:52.143 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:52.143 "is_configured": true, 00:14:52.143 "data_offset": 2048, 00:14:52.143 "data_size": 63488 00:14:52.143 }, 00:14:52.143 { 00:14:52.143 "name": "BaseBdev3", 00:14:52.143 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:52.143 "is_configured": true, 00:14:52.143 "data_offset": 2048, 00:14:52.143 "data_size": 63488 00:14:52.143 }, 00:14:52.143 { 00:14:52.143 "name": "BaseBdev4", 00:14:52.143 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:52.143 "is_configured": true, 00:14:52.143 "data_offset": 2048, 00:14:52.143 "data_size": 63488 00:14:52.143 } 00:14:52.143 ] 00:14:52.143 }' 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.143 
23:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.143 [2024-11-02 23:54:46.149440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.143 [2024-11-02 23:54:46.149582] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:52.143 [2024-11-02 23:54:46.149595] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:52.143 request: 00:14:52.143 { 00:14:52.143 "base_bdev": "BaseBdev1", 00:14:52.143 "raid_bdev": "raid_bdev1", 00:14:52.143 "method": "bdev_raid_add_base_bdev", 00:14:52.143 "req_id": 1 00:14:52.143 } 00:14:52.143 Got JSON-RPC error response 00:14:52.143 response: 00:14:52.143 { 00:14:52.143 "code": -22, 00:14:52.143 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:14:52.143 } 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:52.143 23:54:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:53.080 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:53.080 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.080 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.080 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.080 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.080 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.080 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.080 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.080 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.080 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.080 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.080 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.080 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.080 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.339 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.339 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.339 "name": "raid_bdev1", 00:14:53.339 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:53.339 "strip_size_kb": 64, 00:14:53.339 "state": "online", 00:14:53.339 "raid_level": "raid5f", 00:14:53.339 "superblock": true, 00:14:53.339 "num_base_bdevs": 4, 00:14:53.339 "num_base_bdevs_discovered": 3, 00:14:53.339 "num_base_bdevs_operational": 3, 00:14:53.339 "base_bdevs_list": [ 00:14:53.339 { 00:14:53.339 "name": null, 00:14:53.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.339 "is_configured": false, 00:14:53.339 "data_offset": 0, 00:14:53.339 "data_size": 63488 00:14:53.339 }, 00:14:53.339 { 00:14:53.339 "name": "BaseBdev2", 00:14:53.339 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:53.339 "is_configured": true, 00:14:53.339 "data_offset": 2048, 00:14:53.339 "data_size": 63488 00:14:53.339 }, 00:14:53.339 { 00:14:53.339 "name": "BaseBdev3", 00:14:53.339 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:53.339 "is_configured": true, 00:14:53.339 "data_offset": 2048, 00:14:53.339 "data_size": 63488 00:14:53.339 }, 00:14:53.339 { 00:14:53.339 "name": "BaseBdev4", 00:14:53.339 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:53.339 "is_configured": true, 00:14:53.339 "data_offset": 2048, 00:14:53.339 "data_size": 63488 00:14:53.339 } 00:14:53.339 ] 00:14:53.339 }' 00:14:53.339 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.339 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:53.604 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:53.604 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.604 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:53.604 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:53.604 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.604 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.604 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.604 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.604 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.604 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.604 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.604 "name": "raid_bdev1", 00:14:53.604 "uuid": "7980ea6c-5b5b-4f2f-a461-b163b142de73", 00:14:53.604 "strip_size_kb": 64, 00:14:53.604 "state": "online", 00:14:53.604 "raid_level": "raid5f", 00:14:53.604 "superblock": true, 00:14:53.604 "num_base_bdevs": 4, 00:14:53.604 "num_base_bdevs_discovered": 3, 00:14:53.604 "num_base_bdevs_operational": 3, 00:14:53.604 "base_bdevs_list": [ 00:14:53.604 { 00:14:53.604 "name": null, 00:14:53.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.604 "is_configured": false, 00:14:53.604 "data_offset": 0, 00:14:53.604 "data_size": 63488 00:14:53.604 }, 00:14:53.604 { 00:14:53.604 "name": "BaseBdev2", 00:14:53.604 "uuid": "5f759f5e-0232-5d03-b4d5-95b5a5d6a5ef", 00:14:53.604 "is_configured": true, 
00:14:53.604 "data_offset": 2048, 00:14:53.604 "data_size": 63488 00:14:53.604 }, 00:14:53.604 { 00:14:53.604 "name": "BaseBdev3", 00:14:53.604 "uuid": "e2142cf4-7531-54b1-8b81-136aa354a055", 00:14:53.604 "is_configured": true, 00:14:53.604 "data_offset": 2048, 00:14:53.604 "data_size": 63488 00:14:53.604 }, 00:14:53.604 { 00:14:53.604 "name": "BaseBdev4", 00:14:53.604 "uuid": "87dfe053-f451-5bed-9c21-1a6d5f418de7", 00:14:53.604 "is_configured": true, 00:14:53.604 "data_offset": 2048, 00:14:53.604 "data_size": 63488 00:14:53.604 } 00:14:53.604 ] 00:14:53.604 }' 00:14:53.604 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.869 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:53.869 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.869 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:53.869 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95295 00:14:53.869 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 95295 ']' 00:14:53.869 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 95295 00:14:53.869 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:53.869 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:53.869 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95295 00:14:53.869 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:53.869 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:53.869 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 
-- # echo 'killing process with pid 95295' 00:14:53.869 killing process with pid 95295 00:14:53.869 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 95295 00:14:53.869 Received shutdown signal, test time was about 60.000000 seconds 00:14:53.869 00:14:53.869 Latency(us) 00:14:53.869 [2024-11-02T23:54:47.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.869 [2024-11-02T23:54:47.964Z] =================================================================================================================== 00:14:53.869 [2024-11-02T23:54:47.964Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:53.869 [2024-11-02 23:54:47.777549] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:53.869 23:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 95295 00:14:53.869 [2024-11-02 23:54:47.777667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.869 [2024-11-02 23:54:47.777765] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.869 [2024-11-02 23:54:47.777776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:14:53.869 [2024-11-02 23:54:47.871369] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:54.128 23:54:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:54.129 00:14:54.129 real 0m25.511s 00:14:54.129 user 0m32.282s 00:14:54.129 sys 0m3.268s 00:14:54.129 23:54:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:54.129 23:54:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.129 ************************************ 00:14:54.129 END TEST raid5f_rebuild_test_sb 00:14:54.129 ************************************ 00:14:54.392 23:54:48 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:14:54.392 23:54:48 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:14:54.392 23:54:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:54.392 23:54:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:54.392 23:54:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:54.392 ************************************ 00:14:54.392 START TEST raid_state_function_test_sb_4k 00:14:54.392 ************************************ 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:54.392 23:54:48 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=96101 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96101' 00:14:54.392 Process raid pid: 96101 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96101 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 96101 ']' 00:14:54.392 23:54:48 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:54.392 23:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.392 [2024-11-02 23:54:48.354340] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:14:54.392 [2024-11-02 23:54:48.354525] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.651 [2024-11-02 23:54:48.512031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.651 [2024-11-02 23:54:48.550711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.651 [2024-11-02 23:54:48.626691] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.651 [2024-11-02 23:54:48.626874] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.219 [2024-11-02 23:54:49.169879] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:55.219 [2024-11-02 23:54:49.169943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:55.219 [2024-11-02 23:54:49.169967] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.219 [2024-11-02 23:54:49.169983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.219 
23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.219 "name": "Existed_Raid", 00:14:55.219 "uuid": "dbe3c64c-653e-4464-a9eb-9c44d0c80aea", 00:14:55.219 "strip_size_kb": 0, 00:14:55.219 "state": "configuring", 00:14:55.219 "raid_level": "raid1", 00:14:55.219 "superblock": true, 00:14:55.219 "num_base_bdevs": 2, 00:14:55.219 "num_base_bdevs_discovered": 0, 00:14:55.219 "num_base_bdevs_operational": 2, 00:14:55.219 "base_bdevs_list": [ 00:14:55.219 { 00:14:55.219 "name": "BaseBdev1", 00:14:55.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.219 "is_configured": false, 00:14:55.219 "data_offset": 0, 00:14:55.219 "data_size": 0 00:14:55.219 }, 00:14:55.219 { 00:14:55.219 "name": "BaseBdev2", 00:14:55.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.219 "is_configured": false, 00:14:55.219 "data_offset": 0, 00:14:55.219 "data_size": 0 00:14:55.219 } 00:14:55.219 ] 00:14:55.219 }' 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.219 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.478 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:14:55.478 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.478 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.737 [2024-11-02 23:54:49.573044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:55.737 [2024-11-02 23:54:49.573154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.737 [2024-11-02 23:54:49.585045] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:55.737 [2024-11-02 23:54:49.585135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:55.737 [2024-11-02 23:54:49.585176] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.737 [2024-11-02 23:54:49.585217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.737 23:54:49 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.737 [2024-11-02 23:54:49.612299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.737 BaseBdev1 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.737 [ 00:14:55.737 { 00:14:55.737 "name": "BaseBdev1", 00:14:55.737 "aliases": [ 00:14:55.737 
"0c888814-a0c8-42af-add8-bb84b08a08bb" 00:14:55.737 ], 00:14:55.737 "product_name": "Malloc disk", 00:14:55.737 "block_size": 4096, 00:14:55.737 "num_blocks": 8192, 00:14:55.737 "uuid": "0c888814-a0c8-42af-add8-bb84b08a08bb", 00:14:55.737 "assigned_rate_limits": { 00:14:55.737 "rw_ios_per_sec": 0, 00:14:55.737 "rw_mbytes_per_sec": 0, 00:14:55.737 "r_mbytes_per_sec": 0, 00:14:55.737 "w_mbytes_per_sec": 0 00:14:55.737 }, 00:14:55.737 "claimed": true, 00:14:55.737 "claim_type": "exclusive_write", 00:14:55.737 "zoned": false, 00:14:55.737 "supported_io_types": { 00:14:55.737 "read": true, 00:14:55.737 "write": true, 00:14:55.737 "unmap": true, 00:14:55.737 "flush": true, 00:14:55.737 "reset": true, 00:14:55.737 "nvme_admin": false, 00:14:55.737 "nvme_io": false, 00:14:55.737 "nvme_io_md": false, 00:14:55.737 "write_zeroes": true, 00:14:55.737 "zcopy": true, 00:14:55.737 "get_zone_info": false, 00:14:55.737 "zone_management": false, 00:14:55.737 "zone_append": false, 00:14:55.737 "compare": false, 00:14:55.737 "compare_and_write": false, 00:14:55.737 "abort": true, 00:14:55.737 "seek_hole": false, 00:14:55.737 "seek_data": false, 00:14:55.737 "copy": true, 00:14:55.737 "nvme_iov_md": false 00:14:55.737 }, 00:14:55.737 "memory_domains": [ 00:14:55.737 { 00:14:55.737 "dma_device_id": "system", 00:14:55.737 "dma_device_type": 1 00:14:55.737 }, 00:14:55.737 { 00:14:55.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.737 "dma_device_type": 2 00:14:55.737 } 00:14:55.737 ], 00:14:55.737 "driver_specific": {} 00:14:55.737 } 00:14:55.737 ] 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.737 "name": "Existed_Raid", 00:14:55.737 "uuid": "49475aab-5815-44c5-b6e6-57671f19dc31", 00:14:55.737 "strip_size_kb": 0, 00:14:55.737 "state": "configuring", 00:14:55.737 "raid_level": "raid1", 00:14:55.737 "superblock": true, 00:14:55.737 "num_base_bdevs": 2, 00:14:55.737 
"num_base_bdevs_discovered": 1, 00:14:55.737 "num_base_bdevs_operational": 2, 00:14:55.737 "base_bdevs_list": [ 00:14:55.737 { 00:14:55.737 "name": "BaseBdev1", 00:14:55.737 "uuid": "0c888814-a0c8-42af-add8-bb84b08a08bb", 00:14:55.737 "is_configured": true, 00:14:55.737 "data_offset": 256, 00:14:55.737 "data_size": 7936 00:14:55.737 }, 00:14:55.737 { 00:14:55.737 "name": "BaseBdev2", 00:14:55.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.737 "is_configured": false, 00:14:55.737 "data_offset": 0, 00:14:55.737 "data_size": 0 00:14:55.737 } 00:14:55.737 ] 00:14:55.737 }' 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.737 23:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.325 [2024-11-02 23:54:50.115468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:56.325 [2024-11-02 23:54:50.115561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.325 [2024-11-02 23:54:50.127461] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.325 [2024-11-02 23:54:50.129586] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:56.325 [2024-11-02 23:54:50.129638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.325 "name": "Existed_Raid", 00:14:56.325 "uuid": "396eaec8-81e4-4535-8e53-aa993b30cf04", 00:14:56.325 "strip_size_kb": 0, 00:14:56.325 "state": "configuring", 00:14:56.325 "raid_level": "raid1", 00:14:56.325 "superblock": true, 00:14:56.325 "num_base_bdevs": 2, 00:14:56.325 "num_base_bdevs_discovered": 1, 00:14:56.325 "num_base_bdevs_operational": 2, 00:14:56.325 "base_bdevs_list": [ 00:14:56.325 { 00:14:56.325 "name": "BaseBdev1", 00:14:56.325 "uuid": "0c888814-a0c8-42af-add8-bb84b08a08bb", 00:14:56.325 "is_configured": true, 00:14:56.325 "data_offset": 256, 00:14:56.325 "data_size": 7936 00:14:56.325 }, 00:14:56.325 { 00:14:56.325 "name": "BaseBdev2", 00:14:56.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.325 "is_configured": false, 00:14:56.325 "data_offset": 0, 00:14:56.325 "data_size": 0 00:14:56.325 } 00:14:56.325 ] 00:14:56.325 }' 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.325 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.586 23:54:50 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.586 [2024-11-02 23:54:50.539399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.586 [2024-11-02 23:54:50.539701] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:56.586 [2024-11-02 23:54:50.539789] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:56.586 [2024-11-02 23:54:50.540116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:14:56.586 BaseBdev2 00:14:56.586 [2024-11-02 23:54:50.540331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:56.586 [2024-11-02 23:54:50.540399] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:14:56.586 [2024-11-02 23:54:50.540590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:56.586 23:54:50 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.586 [ 00:14:56.586 { 00:14:56.586 "name": "BaseBdev2", 00:14:56.586 "aliases": [ 00:14:56.586 "57692096-c353-47f7-936b-50c026324ac7" 00:14:56.586 ], 00:14:56.586 "product_name": "Malloc disk", 00:14:56.586 "block_size": 4096, 00:14:56.586 "num_blocks": 8192, 00:14:56.586 "uuid": "57692096-c353-47f7-936b-50c026324ac7", 00:14:56.586 "assigned_rate_limits": { 00:14:56.586 "rw_ios_per_sec": 0, 00:14:56.586 "rw_mbytes_per_sec": 0, 00:14:56.586 "r_mbytes_per_sec": 0, 00:14:56.586 "w_mbytes_per_sec": 0 00:14:56.586 }, 00:14:56.586 "claimed": true, 00:14:56.586 "claim_type": "exclusive_write", 00:14:56.586 "zoned": false, 00:14:56.586 "supported_io_types": { 00:14:56.586 "read": true, 00:14:56.586 "write": true, 00:14:56.586 "unmap": true, 00:14:56.586 "flush": true, 00:14:56.586 "reset": true, 00:14:56.586 "nvme_admin": false, 00:14:56.586 "nvme_io": false, 00:14:56.586 "nvme_io_md": false, 00:14:56.586 "write_zeroes": true, 00:14:56.586 "zcopy": true, 00:14:56.586 "get_zone_info": false, 00:14:56.586 "zone_management": false, 00:14:56.586 "zone_append": false, 00:14:56.586 "compare": false, 00:14:56.586 "compare_and_write": false, 00:14:56.586 "abort": true, 00:14:56.586 "seek_hole": false, 00:14:56.586 "seek_data": false, 00:14:56.586 "copy": true, 00:14:56.586 "nvme_iov_md": false 
00:14:56.586 }, 00:14:56.586 "memory_domains": [ 00:14:56.586 { 00:14:56.586 "dma_device_id": "system", 00:14:56.586 "dma_device_type": 1 00:14:56.586 }, 00:14:56.586 { 00:14:56.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.586 "dma_device_type": 2 00:14:56.586 } 00:14:56.586 ], 00:14:56.586 "driver_specific": {} 00:14:56.586 } 00:14:56.586 ] 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.586 "name": "Existed_Raid", 00:14:56.586 "uuid": "396eaec8-81e4-4535-8e53-aa993b30cf04", 00:14:56.586 "strip_size_kb": 0, 00:14:56.586 "state": "online", 00:14:56.586 "raid_level": "raid1", 00:14:56.586 "superblock": true, 00:14:56.586 "num_base_bdevs": 2, 00:14:56.586 "num_base_bdevs_discovered": 2, 00:14:56.586 "num_base_bdevs_operational": 2, 00:14:56.586 "base_bdevs_list": [ 00:14:56.586 { 00:14:56.586 "name": "BaseBdev1", 00:14:56.586 "uuid": "0c888814-a0c8-42af-add8-bb84b08a08bb", 00:14:56.586 "is_configured": true, 00:14:56.586 "data_offset": 256, 00:14:56.586 "data_size": 7936 00:14:56.586 }, 00:14:56.586 { 00:14:56.586 "name": "BaseBdev2", 00:14:56.586 "uuid": "57692096-c353-47f7-936b-50c026324ac7", 00:14:56.586 "is_configured": true, 00:14:56.586 "data_offset": 256, 00:14:56.586 "data_size": 7936 00:14:56.586 } 00:14:56.586 ] 00:14:56.586 }' 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.586 23:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:57.155 23:54:51 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.155 [2024-11-02 23:54:51.054884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:57.155 "name": "Existed_Raid", 00:14:57.155 "aliases": [ 00:14:57.155 "396eaec8-81e4-4535-8e53-aa993b30cf04" 00:14:57.155 ], 00:14:57.155 "product_name": "Raid Volume", 00:14:57.155 "block_size": 4096, 00:14:57.155 "num_blocks": 7936, 00:14:57.155 "uuid": "396eaec8-81e4-4535-8e53-aa993b30cf04", 00:14:57.155 "assigned_rate_limits": { 00:14:57.155 "rw_ios_per_sec": 0, 00:14:57.155 "rw_mbytes_per_sec": 0, 00:14:57.155 "r_mbytes_per_sec": 0, 00:14:57.155 "w_mbytes_per_sec": 0 00:14:57.155 }, 00:14:57.155 "claimed": false, 00:14:57.155 "zoned": false, 00:14:57.155 "supported_io_types": { 00:14:57.155 "read": true, 
00:14:57.155 "write": true, 00:14:57.155 "unmap": false, 00:14:57.155 "flush": false, 00:14:57.155 "reset": true, 00:14:57.155 "nvme_admin": false, 00:14:57.155 "nvme_io": false, 00:14:57.155 "nvme_io_md": false, 00:14:57.155 "write_zeroes": true, 00:14:57.155 "zcopy": false, 00:14:57.155 "get_zone_info": false, 00:14:57.155 "zone_management": false, 00:14:57.155 "zone_append": false, 00:14:57.155 "compare": false, 00:14:57.155 "compare_and_write": false, 00:14:57.155 "abort": false, 00:14:57.155 "seek_hole": false, 00:14:57.155 "seek_data": false, 00:14:57.155 "copy": false, 00:14:57.155 "nvme_iov_md": false 00:14:57.155 }, 00:14:57.155 "memory_domains": [ 00:14:57.155 { 00:14:57.155 "dma_device_id": "system", 00:14:57.155 "dma_device_type": 1 00:14:57.155 }, 00:14:57.155 { 00:14:57.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.155 "dma_device_type": 2 00:14:57.155 }, 00:14:57.155 { 00:14:57.155 "dma_device_id": "system", 00:14:57.155 "dma_device_type": 1 00:14:57.155 }, 00:14:57.155 { 00:14:57.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.155 "dma_device_type": 2 00:14:57.155 } 00:14:57.155 ], 00:14:57.155 "driver_specific": { 00:14:57.155 "raid": { 00:14:57.155 "uuid": "396eaec8-81e4-4535-8e53-aa993b30cf04", 00:14:57.155 "strip_size_kb": 0, 00:14:57.155 "state": "online", 00:14:57.155 "raid_level": "raid1", 00:14:57.155 "superblock": true, 00:14:57.155 "num_base_bdevs": 2, 00:14:57.155 "num_base_bdevs_discovered": 2, 00:14:57.155 "num_base_bdevs_operational": 2, 00:14:57.155 "base_bdevs_list": [ 00:14:57.155 { 00:14:57.155 "name": "BaseBdev1", 00:14:57.155 "uuid": "0c888814-a0c8-42af-add8-bb84b08a08bb", 00:14:57.155 "is_configured": true, 00:14:57.155 "data_offset": 256, 00:14:57.155 "data_size": 7936 00:14:57.155 }, 00:14:57.155 { 00:14:57.155 "name": "BaseBdev2", 00:14:57.155 "uuid": "57692096-c353-47f7-936b-50c026324ac7", 00:14:57.155 "is_configured": true, 00:14:57.155 "data_offset": 256, 00:14:57.155 "data_size": 7936 00:14:57.155 } 
00:14:57.155 ] 00:14:57.155 } 00:14:57.155 } 00:14:57.155 }' 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:57.155 BaseBdev2' 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.155 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.415 [2024-11-02 23:54:51.286294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:57.415 23:54:51 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.415 "name": "Existed_Raid", 00:14:57.415 "uuid": "396eaec8-81e4-4535-8e53-aa993b30cf04", 00:14:57.415 "strip_size_kb": 0, 00:14:57.415 "state": "online", 00:14:57.415 "raid_level": "raid1", 00:14:57.415 "superblock": true, 00:14:57.415 
"num_base_bdevs": 2, 00:14:57.415 "num_base_bdevs_discovered": 1, 00:14:57.415 "num_base_bdevs_operational": 1, 00:14:57.415 "base_bdevs_list": [ 00:14:57.415 { 00:14:57.415 "name": null, 00:14:57.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.415 "is_configured": false, 00:14:57.415 "data_offset": 0, 00:14:57.415 "data_size": 7936 00:14:57.415 }, 00:14:57.415 { 00:14:57.415 "name": "BaseBdev2", 00:14:57.415 "uuid": "57692096-c353-47f7-936b-50c026324ac7", 00:14:57.415 "is_configured": true, 00:14:57.415 "data_offset": 256, 00:14:57.415 "data_size": 7936 00:14:57.415 } 00:14:57.415 ] 00:14:57.415 }' 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.415 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.675 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:57.675 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:57.675 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.675 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.675 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.675 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:57.675 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.675 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:57.675 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:57.675 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:14:57.675 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.676 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.676 [2024-11-02 23:54:51.734549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:57.676 [2024-11-02 23:54:51.734720] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.676 [2024-11-02 23:54:51.755760] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.676 [2024-11-02 23:54:51.755886] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.676 [2024-11-02 23:54:51.755936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:14:57.676 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.676 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:57.676 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:57.676 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.676 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:57.676 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.676 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.935 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.935 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:57.935 23:54:51 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:57.935 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:57.935 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96101 00:14:57.935 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 96101 ']' 00:14:57.935 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 96101 00:14:57.935 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:14:57.935 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:57.935 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 96101 00:14:57.935 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:57.935 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:57.935 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 96101' 00:14:57.935 killing process with pid 96101 00:14:57.935 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 96101 00:14:57.935 [2024-11-02 23:54:51.840309] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:57.935 23:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 96101 00:14:57.935 [2024-11-02 23:54:51.841893] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.195 ************************************ 00:14:58.195 END TEST raid_state_function_test_sb_4k 00:14:58.195 ************************************ 00:14:58.195 23:54:52 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@328 -- # return 0 00:14:58.195 00:14:58.195 real 0m3.912s 00:14:58.195 user 0m5.952s 00:14:58.195 sys 0m0.896s 00:14:58.195 23:54:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:58.195 23:54:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.195 23:54:52 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:14:58.195 23:54:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:58.195 23:54:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:58.195 23:54:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:58.195 ************************************ 00:14:58.195 START TEST raid_superblock_test_4k 00:14:58.195 ************************************ 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:58.195 
23:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96338 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 96338 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 96338 ']' 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:58.195 23:54:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.455 [2024-11-02 23:54:52.344624] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:14:58.455 [2024-11-02 23:54:52.344872] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96338 ] 00:14:58.455 [2024-11-02 23:54:52.501115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.455 [2024-11-02 23:54:52.544109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.715 [2024-11-02 23:54:52.621325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.715 [2024-11-02 23:54:52.621459] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.284 malloc1 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.284 [2024-11-02 23:54:53.170203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:59.284 [2024-11-02 23:54:53.170367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.284 [2024-11-02 23:54:53.170412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:59.284 [2024-11-02 23:54:53.170447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.284 [2024-11-02 23:54:53.172526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.284 [2024-11-02 23:54:53.172615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:59.284 pt1 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.284 malloc2 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.284 [2024-11-02 23:54:53.198867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:59.284 [2024-11-02 23:54:53.198918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.284 [2024-11-02 23:54:53.198934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:59.284 [2024-11-02 23:54:53.198944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.284 [2024-11-02 23:54:53.200982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.284 [2024-11-02 
23:54:53.201019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:59.284 pt2 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.284 [2024-11-02 23:54:53.210893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:59.284 [2024-11-02 23:54:53.212707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:59.284 [2024-11-02 23:54:53.212934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:59.284 [2024-11-02 23:54:53.212954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:59.284 [2024-11-02 23:54:53.213224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:14:59.284 [2024-11-02 23:54:53.213374] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:59.284 [2024-11-02 23:54:53.213389] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:59.284 [2024-11-02 23:54:53.213526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:59.284 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.285 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.285 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.285 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.285 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.285 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.285 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.285 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.285 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.285 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.285 "name": "raid_bdev1", 00:14:59.285 "uuid": "c6fa6b84-223f-48c4-b1e0-7d026addad81", 00:14:59.285 "strip_size_kb": 0, 00:14:59.285 "state": "online", 00:14:59.285 "raid_level": "raid1", 00:14:59.285 "superblock": true, 00:14:59.285 "num_base_bdevs": 2, 00:14:59.285 
"num_base_bdevs_discovered": 2, 00:14:59.285 "num_base_bdevs_operational": 2, 00:14:59.285 "base_bdevs_list": [ 00:14:59.285 { 00:14:59.285 "name": "pt1", 00:14:59.285 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:59.285 "is_configured": true, 00:14:59.285 "data_offset": 256, 00:14:59.285 "data_size": 7936 00:14:59.285 }, 00:14:59.285 { 00:14:59.285 "name": "pt2", 00:14:59.285 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.285 "is_configured": true, 00:14:59.285 "data_offset": 256, 00:14:59.285 "data_size": 7936 00:14:59.285 } 00:14:59.285 ] 00:14:59.285 }' 00:14:59.285 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.285 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.852 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:59.852 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:59.852 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:59.852 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:59.852 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:14:59.852 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:59.852 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:59.852 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:59.852 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.852 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.852 [2024-11-02 23:54:53.654428] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:59.852 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.852 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:59.852 "name": "raid_bdev1", 00:14:59.852 "aliases": [ 00:14:59.852 "c6fa6b84-223f-48c4-b1e0-7d026addad81" 00:14:59.852 ], 00:14:59.852 "product_name": "Raid Volume", 00:14:59.852 "block_size": 4096, 00:14:59.852 "num_blocks": 7936, 00:14:59.852 "uuid": "c6fa6b84-223f-48c4-b1e0-7d026addad81", 00:14:59.852 "assigned_rate_limits": { 00:14:59.852 "rw_ios_per_sec": 0, 00:14:59.852 "rw_mbytes_per_sec": 0, 00:14:59.852 "r_mbytes_per_sec": 0, 00:14:59.852 "w_mbytes_per_sec": 0 00:14:59.852 }, 00:14:59.852 "claimed": false, 00:14:59.852 "zoned": false, 00:14:59.852 "supported_io_types": { 00:14:59.852 "read": true, 00:14:59.852 "write": true, 00:14:59.852 "unmap": false, 00:14:59.852 "flush": false, 00:14:59.852 "reset": true, 00:14:59.852 "nvme_admin": false, 00:14:59.852 "nvme_io": false, 00:14:59.852 "nvme_io_md": false, 00:14:59.852 "write_zeroes": true, 00:14:59.852 "zcopy": false, 00:14:59.852 "get_zone_info": false, 00:14:59.852 "zone_management": false, 00:14:59.852 "zone_append": false, 00:14:59.852 "compare": false, 00:14:59.852 "compare_and_write": false, 00:14:59.852 "abort": false, 00:14:59.852 "seek_hole": false, 00:14:59.852 "seek_data": false, 00:14:59.852 "copy": false, 00:14:59.852 "nvme_iov_md": false 00:14:59.852 }, 00:14:59.852 "memory_domains": [ 00:14:59.852 { 00:14:59.852 "dma_device_id": "system", 00:14:59.852 "dma_device_type": 1 00:14:59.852 }, 00:14:59.852 { 00:14:59.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.852 "dma_device_type": 2 00:14:59.852 }, 00:14:59.852 { 00:14:59.852 "dma_device_id": "system", 00:14:59.852 "dma_device_type": 1 00:14:59.852 }, 00:14:59.852 { 00:14:59.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.852 "dma_device_type": 2 00:14:59.852 } 00:14:59.852 ], 
00:14:59.852 "driver_specific": { 00:14:59.852 "raid": { 00:14:59.852 "uuid": "c6fa6b84-223f-48c4-b1e0-7d026addad81", 00:14:59.852 "strip_size_kb": 0, 00:14:59.852 "state": "online", 00:14:59.852 "raid_level": "raid1", 00:14:59.852 "superblock": true, 00:14:59.852 "num_base_bdevs": 2, 00:14:59.852 "num_base_bdevs_discovered": 2, 00:14:59.852 "num_base_bdevs_operational": 2, 00:14:59.852 "base_bdevs_list": [ 00:14:59.852 { 00:14:59.853 "name": "pt1", 00:14:59.853 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:59.853 "is_configured": true, 00:14:59.853 "data_offset": 256, 00:14:59.853 "data_size": 7936 00:14:59.853 }, 00:14:59.853 { 00:14:59.853 "name": "pt2", 00:14:59.853 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.853 "is_configured": true, 00:14:59.853 "data_offset": 256, 00:14:59.853 "data_size": 7936 00:14:59.853 } 00:14:59.853 ] 00:14:59.853 } 00:14:59.853 } 00:14:59.853 }' 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:59.853 pt2' 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.853 23:54:53 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:59.853 [2024-11-02 23:54:53.854000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c6fa6b84-223f-48c4-b1e0-7d026addad81 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z c6fa6b84-223f-48c4-b1e0-7d026addad81 ']' 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.853 [2024-11-02 23:54:53.901688] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:59.853 [2024-11-02 23:54:53.901715] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.853 [2024-11-02 23:54:53.901791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.853 [2024-11-02 23:54:53.901848] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.853 [2024-11-02 23:54:53.901857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:59.853 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.112 23:54:53 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:00.112 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:00.112 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:00.112 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:00.112 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.112 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.112 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.112 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:00.112 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:00.112 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.112 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.112 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.112 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:00.112 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.112 23:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.112 23:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:00.112 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.112 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:00.112 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:00.112 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:15:00.112 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:00.112 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:00.112 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:00.112 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:00.112 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:00.112 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:00.112 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.112 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.112 [2024-11-02 23:54:54.041458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:00.112 [2024-11-02 23:54:54.043250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:00.112 [2024-11-02 23:54:54.043307] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:00.112 [2024-11-02 23:54:54.043349] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:00.113 [2024-11-02 23:54:54.043366] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:00.113 [2024-11-02 23:54:54.043375] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:15:00.113 request: 00:15:00.113 { 00:15:00.113 "name": "raid_bdev1", 00:15:00.113 "raid_level": "raid1", 00:15:00.113 "base_bdevs": [ 00:15:00.113 "malloc1", 00:15:00.113 "malloc2" 00:15:00.113 ], 00:15:00.113 "superblock": false, 00:15:00.113 "method": "bdev_raid_create", 00:15:00.113 "req_id": 1 00:15:00.113 } 00:15:00.113 Got JSON-RPC error response 00:15:00.113 response: 00:15:00.113 { 00:15:00.113 "code": -17, 00:15:00.113 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:00.113 } 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.113 [2024-11-02 23:54:54.093358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:00.113 [2024-11-02 23:54:54.093455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.113 [2024-11-02 23:54:54.093489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:00.113 [2024-11-02 23:54:54.093515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.113 [2024-11-02 23:54:54.095561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.113 [2024-11-02 23:54:54.095633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:00.113 [2024-11-02 23:54:54.095716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:00.113 [2024-11-02 23:54:54.095786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:00.113 pt1 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.113 "name": "raid_bdev1", 00:15:00.113 "uuid": "c6fa6b84-223f-48c4-b1e0-7d026addad81", 00:15:00.113 "strip_size_kb": 0, 00:15:00.113 "state": "configuring", 00:15:00.113 "raid_level": "raid1", 00:15:00.113 "superblock": true, 00:15:00.113 "num_base_bdevs": 2, 00:15:00.113 "num_base_bdevs_discovered": 1, 00:15:00.113 "num_base_bdevs_operational": 2, 00:15:00.113 "base_bdevs_list": [ 00:15:00.113 { 00:15:00.113 "name": "pt1", 00:15:00.113 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:00.113 "is_configured": true, 00:15:00.113 "data_offset": 256, 00:15:00.113 "data_size": 7936 00:15:00.113 }, 00:15:00.113 { 00:15:00.113 "name": null, 00:15:00.113 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.113 "is_configured": false, 00:15:00.113 "data_offset": 256, 00:15:00.113 "data_size": 7936 00:15:00.113 } 
00:15:00.113 ] 00:15:00.113 }' 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.113 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.690 [2024-11-02 23:54:54.496678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:00.690 [2024-11-02 23:54:54.496803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.690 [2024-11-02 23:54:54.496841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:00.690 [2024-11-02 23:54:54.496870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.690 [2024-11-02 23:54:54.497229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.690 [2024-11-02 23:54:54.497286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:00.690 [2024-11-02 23:54:54.497372] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:00.690 [2024-11-02 23:54:54.497419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:00.690 [2024-11-02 23:54:54.497533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000001900 00:15:00.690 [2024-11-02 23:54:54.497570] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:00.690 [2024-11-02 23:54:54.497817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:00.690 [2024-11-02 23:54:54.497934] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:00.690 [2024-11-02 23:54:54.497949] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:15:00.690 [2024-11-02 23:54:54.498046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.690 pt2 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.690 "name": "raid_bdev1", 00:15:00.690 "uuid": "c6fa6b84-223f-48c4-b1e0-7d026addad81", 00:15:00.690 "strip_size_kb": 0, 00:15:00.690 "state": "online", 00:15:00.690 "raid_level": "raid1", 00:15:00.690 "superblock": true, 00:15:00.690 "num_base_bdevs": 2, 00:15:00.690 "num_base_bdevs_discovered": 2, 00:15:00.690 "num_base_bdevs_operational": 2, 00:15:00.690 "base_bdevs_list": [ 00:15:00.690 { 00:15:00.690 "name": "pt1", 00:15:00.690 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:00.690 "is_configured": true, 00:15:00.690 "data_offset": 256, 00:15:00.690 "data_size": 7936 00:15:00.690 }, 00:15:00.690 { 00:15:00.690 "name": "pt2", 00:15:00.690 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.690 "is_configured": true, 00:15:00.690 "data_offset": 256, 00:15:00.690 "data_size": 7936 00:15:00.690 } 00:15:00.690 ] 00:15:00.690 }' 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.690 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.951 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:15:00.951 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:00.951 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:00.951 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:00.951 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:00.951 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:00.951 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:00.951 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:00.951 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.951 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.951 [2024-11-02 23:54:54.952146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.951 23:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.951 23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:00.951 "name": "raid_bdev1", 00:15:00.951 "aliases": [ 00:15:00.951 "c6fa6b84-223f-48c4-b1e0-7d026addad81" 00:15:00.951 ], 00:15:00.951 "product_name": "Raid Volume", 00:15:00.951 "block_size": 4096, 00:15:00.951 "num_blocks": 7936, 00:15:00.951 "uuid": "c6fa6b84-223f-48c4-b1e0-7d026addad81", 00:15:00.951 "assigned_rate_limits": { 00:15:00.951 "rw_ios_per_sec": 0, 00:15:00.951 "rw_mbytes_per_sec": 0, 00:15:00.951 "r_mbytes_per_sec": 0, 00:15:00.951 "w_mbytes_per_sec": 0 00:15:00.951 }, 00:15:00.951 "claimed": false, 00:15:00.951 "zoned": false, 00:15:00.951 "supported_io_types": { 00:15:00.951 "read": true, 00:15:00.951 "write": true, 00:15:00.951 "unmap": false, 
00:15:00.951 "flush": false, 00:15:00.951 "reset": true, 00:15:00.951 "nvme_admin": false, 00:15:00.951 "nvme_io": false, 00:15:00.951 "nvme_io_md": false, 00:15:00.951 "write_zeroes": true, 00:15:00.951 "zcopy": false, 00:15:00.951 "get_zone_info": false, 00:15:00.951 "zone_management": false, 00:15:00.951 "zone_append": false, 00:15:00.951 "compare": false, 00:15:00.951 "compare_and_write": false, 00:15:00.951 "abort": false, 00:15:00.951 "seek_hole": false, 00:15:00.951 "seek_data": false, 00:15:00.951 "copy": false, 00:15:00.951 "nvme_iov_md": false 00:15:00.951 }, 00:15:00.951 "memory_domains": [ 00:15:00.951 { 00:15:00.951 "dma_device_id": "system", 00:15:00.951 "dma_device_type": 1 00:15:00.951 }, 00:15:00.951 { 00:15:00.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.951 "dma_device_type": 2 00:15:00.951 }, 00:15:00.951 { 00:15:00.951 "dma_device_id": "system", 00:15:00.951 "dma_device_type": 1 00:15:00.951 }, 00:15:00.951 { 00:15:00.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.951 "dma_device_type": 2 00:15:00.951 } 00:15:00.951 ], 00:15:00.951 "driver_specific": { 00:15:00.951 "raid": { 00:15:00.951 "uuid": "c6fa6b84-223f-48c4-b1e0-7d026addad81", 00:15:00.951 "strip_size_kb": 0, 00:15:00.951 "state": "online", 00:15:00.951 "raid_level": "raid1", 00:15:00.951 "superblock": true, 00:15:00.951 "num_base_bdevs": 2, 00:15:00.951 "num_base_bdevs_discovered": 2, 00:15:00.951 "num_base_bdevs_operational": 2, 00:15:00.951 "base_bdevs_list": [ 00:15:00.951 { 00:15:00.951 "name": "pt1", 00:15:00.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:00.951 "is_configured": true, 00:15:00.951 "data_offset": 256, 00:15:00.951 "data_size": 7936 00:15:00.951 }, 00:15:00.951 { 00:15:00.951 "name": "pt2", 00:15:00.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.951 "is_configured": true, 00:15:00.951 "data_offset": 256, 00:15:00.951 "data_size": 7936 00:15:00.951 } 00:15:00.951 ] 00:15:00.951 } 00:15:00.951 } 00:15:00.951 }' 00:15:00.951 
23:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:00.951 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:00.951 pt2' 00:15:00.951 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.210 
23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.210 [2024-11-02 23:54:55.203703] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' c6fa6b84-223f-48c4-b1e0-7d026addad81 '!=' c6fa6b84-223f-48c4-b1e0-7d026addad81 ']' 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.210 [2024-11-02 23:54:55.247417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:01.210 
23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.210 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.468 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.468 "name": "raid_bdev1", 00:15:01.468 "uuid": "c6fa6b84-223f-48c4-b1e0-7d026addad81", 
00:15:01.468 "strip_size_kb": 0, 00:15:01.468 "state": "online", 00:15:01.468 "raid_level": "raid1", 00:15:01.468 "superblock": true, 00:15:01.468 "num_base_bdevs": 2, 00:15:01.468 "num_base_bdevs_discovered": 1, 00:15:01.468 "num_base_bdevs_operational": 1, 00:15:01.468 "base_bdevs_list": [ 00:15:01.468 { 00:15:01.468 "name": null, 00:15:01.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.468 "is_configured": false, 00:15:01.468 "data_offset": 0, 00:15:01.468 "data_size": 7936 00:15:01.468 }, 00:15:01.468 { 00:15:01.468 "name": "pt2", 00:15:01.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.469 "is_configured": true, 00:15:01.469 "data_offset": 256, 00:15:01.469 "data_size": 7936 00:15:01.469 } 00:15:01.469 ] 00:15:01.469 }' 00:15:01.469 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.469 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.727 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:01.727 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.727 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.727 [2024-11-02 23:54:55.754628] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:01.727 [2024-11-02 23:54:55.754705] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:01.727 [2024-11-02 23:54:55.754796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.727 [2024-11-02 23:54:55.754859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:01.727 [2024-11-02 23:54:55.754868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:01.727 23:54:55 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.727 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.727 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:01.727 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.727 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.727 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.727 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:01.727 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:01.727 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:01.727 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:01.727 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:01.727 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.727 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.985 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.985 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:01.985 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:01.985 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:01.985 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:01.985 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:15:01.985 23:54:55 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:01.985 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.985 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.985 [2024-11-02 23:54:55.826520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:01.985 [2024-11-02 23:54:55.826658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.985 [2024-11-02 23:54:55.826683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:01.985 [2024-11-02 23:54:55.826692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.985 [2024-11-02 23:54:55.828868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.985 [2024-11-02 23:54:55.828906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:01.985 [2024-11-02 23:54:55.828978] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:01.985 [2024-11-02 23:54:55.829022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:01.985 [2024-11-02 23:54:55.829109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:01.985 [2024-11-02 23:54:55.829117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:01.985 [2024-11-02 23:54:55.829344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:01.985 [2024-11-02 23:54:55.829453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:01.985 [2024-11-02 23:54:55.829479] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 
00:15:01.985 [2024-11-02 23:54:55.829580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.985 pt2 00:15:01.985 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.985 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:01.986 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.986 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.986 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.986 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.986 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:01.986 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.986 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.986 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.986 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.986 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.986 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.986 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.986 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.986 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.986 23:54:55 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.986 "name": "raid_bdev1", 00:15:01.986 "uuid": "c6fa6b84-223f-48c4-b1e0-7d026addad81", 00:15:01.986 "strip_size_kb": 0, 00:15:01.986 "state": "online", 00:15:01.986 "raid_level": "raid1", 00:15:01.986 "superblock": true, 00:15:01.986 "num_base_bdevs": 2, 00:15:01.986 "num_base_bdevs_discovered": 1, 00:15:01.986 "num_base_bdevs_operational": 1, 00:15:01.986 "base_bdevs_list": [ 00:15:01.986 { 00:15:01.986 "name": null, 00:15:01.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.986 "is_configured": false, 00:15:01.986 "data_offset": 256, 00:15:01.986 "data_size": 7936 00:15:01.986 }, 00:15:01.986 { 00:15:01.986 "name": "pt2", 00:15:01.986 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.986 "is_configured": true, 00:15:01.986 "data_offset": 256, 00:15:01.986 "data_size": 7936 00:15:01.986 } 00:15:01.986 ] 00:15:01.986 }' 00:15:01.986 23:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.986 23:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.244 [2024-11-02 23:54:56.257804] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:02.244 [2024-11-02 23:54:56.257884] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.244 [2024-11-02 23:54:56.257963] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.244 [2024-11-02 23:54:56.258022] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.244 [2024-11-02 23:54:56.258096] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.244 [2024-11-02 23:54:56.321685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:02.244 [2024-11-02 23:54:56.321800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.244 [2024-11-02 23:54:56.321834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:15:02.244 [2024-11-02 23:54:56.321867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.244 [2024-11-02 23:54:56.323969] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.244 [2024-11-02 23:54:56.324042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:02.244 [2024-11-02 23:54:56.324130] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:02.244 [2024-11-02 23:54:56.324189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:02.244 [2024-11-02 23:54:56.324317] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:02.244 [2024-11-02 23:54:56.324387] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:02.244 [2024-11-02 23:54:56.324461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:15:02.244 [2024-11-02 23:54:56.324539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:02.244 [2024-11-02 23:54:56.324635] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:15:02.244 [2024-11-02 23:54:56.324675] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:02.244 [2024-11-02 23:54:56.324947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:02.244 [2024-11-02 23:54:56.325097] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:15:02.244 [2024-11-02 23:54:56.325139] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:15:02.244 [2024-11-02 23:54:56.325279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.244 pt1 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.244 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.502 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.502 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.502 "name": "raid_bdev1", 00:15:02.502 "uuid": "c6fa6b84-223f-48c4-b1e0-7d026addad81", 00:15:02.502 "strip_size_kb": 0, 00:15:02.502 "state": "online", 00:15:02.502 "raid_level": "raid1", 
00:15:02.502 "superblock": true, 00:15:02.502 "num_base_bdevs": 2, 00:15:02.502 "num_base_bdevs_discovered": 1, 00:15:02.502 "num_base_bdevs_operational": 1, 00:15:02.502 "base_bdevs_list": [ 00:15:02.502 { 00:15:02.502 "name": null, 00:15:02.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.502 "is_configured": false, 00:15:02.502 "data_offset": 256, 00:15:02.502 "data_size": 7936 00:15:02.502 }, 00:15:02.502 { 00:15:02.502 "name": "pt2", 00:15:02.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.502 "is_configured": true, 00:15:02.502 "data_offset": 256, 00:15:02.502 "data_size": 7936 00:15:02.502 } 00:15:02.502 ] 00:15:02.502 }' 00:15:02.502 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.502 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.761 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:02.761 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:02.761 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.761 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.761 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.761 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:02.761 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.761 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:02.761 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.761 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.761 
[2024-11-02 23:54:56.813050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.761 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.761 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' c6fa6b84-223f-48c4-b1e0-7d026addad81 '!=' c6fa6b84-223f-48c4-b1e0-7d026addad81 ']' 00:15:02.761 23:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96338 00:15:03.019 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 96338 ']' 00:15:03.019 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 96338 00:15:03.019 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:15:03.019 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:03.019 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 96338 00:15:03.019 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:03.019 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:03.019 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 96338' 00:15:03.019 killing process with pid 96338 00:15:03.019 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 96338 00:15:03.019 [2024-11-02 23:54:56.898068] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:03.019 [2024-11-02 23:54:56.898207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.019 23:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # wait 96338 00:15:03.019 [2024-11-02 23:54:56.898284] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:15:03.019 [2024-11-02 23:54:56.898297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:15:03.019 [2024-11-02 23:54:56.921054] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:03.278 23:54:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:15:03.278 00:15:03.278 real 0m4.885s 00:15:03.278 user 0m7.919s 00:15:03.278 sys 0m1.096s 00:15:03.278 23:54:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:03.278 23:54:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.278 ************************************ 00:15:03.278 END TEST raid_superblock_test_4k 00:15:03.278 ************************************ 00:15:03.278 23:54:57 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:15:03.278 23:54:57 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:15:03.278 23:54:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:03.278 23:54:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:03.278 23:54:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:03.278 ************************************ 00:15:03.278 START TEST raid_rebuild_test_sb_4k 00:15:03.278 ************************************ 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:03.278 23:54:57 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=96649 00:15:03.278 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:03.279 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 96649 00:15:03.279 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 96649 ']' 00:15:03.279 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.279 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:03.279 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.279 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:03.279 23:54:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.279 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:03.279 Zero copy mechanism will not be used. 00:15:03.279 [2024-11-02 23:54:57.302522] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:15:03.279 [2024-11-02 23:54:57.302762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96649 ] 00:15:03.538 [2024-11-02 23:54:57.458262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.538 [2024-11-02 23:54:57.483524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.538 [2024-11-02 23:54:57.526442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.538 [2024-11-02 23:54:57.526480] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.106 BaseBdev1_malloc 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.106 [2024-11-02 23:54:58.132756] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:04.106 [2024-11-02 23:54:58.132838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.106 [2024-11-02 23:54:58.132862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:04.106 [2024-11-02 23:54:58.132875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.106 [2024-11-02 23:54:58.134880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.106 [2024-11-02 23:54:58.134918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:04.106 BaseBdev1 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.106 BaseBdev2_malloc 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.106 [2024-11-02 23:54:58.153227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:04.106 [2024-11-02 23:54:58.153349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:04.106 [2024-11-02 23:54:58.153373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:04.106 [2024-11-02 23:54:58.153393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.106 [2024-11-02 23:54:58.155339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.106 [2024-11-02 23:54:58.155380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:04.106 BaseBdev2 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.106 spare_malloc 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.106 spare_delay 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.106 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.106 
[2024-11-02 23:54:58.193644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:04.106 [2024-11-02 23:54:58.193697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.106 [2024-11-02 23:54:58.193717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:04.106 [2024-11-02 23:54:58.193725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.106 [2024-11-02 23:54:58.195844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.106 [2024-11-02 23:54:58.195945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:04.365 spare 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.365 [2024-11-02 23:54:58.205652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.365 [2024-11-02 23:54:58.207450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.365 [2024-11-02 23:54:58.207663] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:04.365 [2024-11-02 23:54:58.207681] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:04.365 [2024-11-02 23:54:58.207932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:04.365 [2024-11-02 23:54:58.208066] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:04.365 [2024-11-02 
23:54:58.208078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:04.365 [2024-11-02 23:54:58.208180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.365 "name": "raid_bdev1", 00:15:04.365 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:04.365 "strip_size_kb": 0, 00:15:04.365 "state": "online", 00:15:04.365 "raid_level": "raid1", 00:15:04.365 "superblock": true, 00:15:04.365 "num_base_bdevs": 2, 00:15:04.365 "num_base_bdevs_discovered": 2, 00:15:04.365 "num_base_bdevs_operational": 2, 00:15:04.365 "base_bdevs_list": [ 00:15:04.365 { 00:15:04.365 "name": "BaseBdev1", 00:15:04.365 "uuid": "46dcfcc9-2489-567e-b3f2-0a30a536305f", 00:15:04.365 "is_configured": true, 00:15:04.365 "data_offset": 256, 00:15:04.365 "data_size": 7936 00:15:04.365 }, 00:15:04.365 { 00:15:04.365 "name": "BaseBdev2", 00:15:04.365 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:04.365 "is_configured": true, 00:15:04.365 "data_offset": 256, 00:15:04.365 "data_size": 7936 00:15:04.365 } 00:15:04.365 ] 00:15:04.365 }' 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.365 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.624 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:04.624 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.624 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.624 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:04.624 [2024-11-02 23:54:58.633129] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.624 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.624 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:15:04.624 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.624 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:04.624 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.624 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.624 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.883 
23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:04.883 [2024-11-02 23:54:58.904466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:04.883 /dev/nbd0 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.883 1+0 records in 00:15:04.883 1+0 records out 00:15:04.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566896 s, 7.2 MB/s 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:15:04.883 23:54:58 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:04.883 23:54:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:05.818 7936+0 records in 00:15:05.818 7936+0 records out 00:15:05.818 32505856 bytes (33 MB, 31 MiB) copied, 0.63407 s, 51.3 MB/s 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:05.818 
[2024-11-02 23:54:59.816407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.818 [2024-11-02 23:54:59.832469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.818 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.818 23:54:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:05.819 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.819 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.819 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.819 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.819 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.819 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.819 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.819 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.819 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.819 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.819 "name": "raid_bdev1", 00:15:05.819 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:05.819 "strip_size_kb": 0, 00:15:05.819 "state": "online", 00:15:05.819 "raid_level": "raid1", 00:15:05.819 "superblock": true, 00:15:05.819 "num_base_bdevs": 2, 00:15:05.819 "num_base_bdevs_discovered": 1, 00:15:05.819 "num_base_bdevs_operational": 1, 00:15:05.819 "base_bdevs_list": [ 00:15:05.819 { 00:15:05.819 "name": null, 00:15:05.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.819 "is_configured": false, 00:15:05.819 "data_offset": 0, 00:15:05.819 "data_size": 7936 00:15:05.819 }, 00:15:05.819 { 00:15:05.819 "name": "BaseBdev2", 00:15:05.819 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:05.819 "is_configured": true, 00:15:05.819 "data_offset": 256, 00:15:05.819 
"data_size": 7936 00:15:05.819 } 00:15:05.819 ] 00:15:05.819 }' 00:15:05.819 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.819 23:54:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.385 23:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:06.385 23:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.385 23:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.385 [2024-11-02 23:55:00.303684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:06.385 [2024-11-02 23:55:00.321050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:15:06.385 23:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.385 23:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:06.385 [2024-11-02 23:55:00.327244] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:07.321 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.321 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.321 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.321 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.321 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.321 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.321 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:07.321 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.321 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.321 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.321 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.321 "name": "raid_bdev1", 00:15:07.321 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:07.321 "strip_size_kb": 0, 00:15:07.321 "state": "online", 00:15:07.321 "raid_level": "raid1", 00:15:07.321 "superblock": true, 00:15:07.321 "num_base_bdevs": 2, 00:15:07.321 "num_base_bdevs_discovered": 2, 00:15:07.321 "num_base_bdevs_operational": 2, 00:15:07.321 "process": { 00:15:07.321 "type": "rebuild", 00:15:07.321 "target": "spare", 00:15:07.321 "progress": { 00:15:07.321 "blocks": 2560, 00:15:07.321 "percent": 32 00:15:07.321 } 00:15:07.321 }, 00:15:07.321 "base_bdevs_list": [ 00:15:07.321 { 00:15:07.321 "name": "spare", 00:15:07.321 "uuid": "6693ba13-edcb-5a92-9083-d82d68dcb1f8", 00:15:07.321 "is_configured": true, 00:15:07.321 "data_offset": 256, 00:15:07.321 "data_size": 7936 00:15:07.321 }, 00:15:07.321 { 00:15:07.321 "name": "BaseBdev2", 00:15:07.321 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:07.321 "is_configured": true, 00:15:07.321 "data_offset": 256, 00:15:07.321 "data_size": 7936 00:15:07.321 } 00:15:07.321 ] 00:15:07.321 }' 00:15:07.321 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.580 [2024-11-02 23:55:01.506117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:07.580 [2024-11-02 23:55:01.532535] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:07.580 [2024-11-02 23:55:01.532657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.580 [2024-11-02 23:55:01.532678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:07.580 [2024-11-02 23:55:01.532699] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.580 "name": "raid_bdev1", 00:15:07.580 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:07.580 "strip_size_kb": 0, 00:15:07.580 "state": "online", 00:15:07.580 "raid_level": "raid1", 00:15:07.580 "superblock": true, 00:15:07.580 "num_base_bdevs": 2, 00:15:07.580 "num_base_bdevs_discovered": 1, 00:15:07.580 "num_base_bdevs_operational": 1, 00:15:07.580 "base_bdevs_list": [ 00:15:07.580 { 00:15:07.580 "name": null, 00:15:07.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.580 "is_configured": false, 00:15:07.580 "data_offset": 0, 00:15:07.580 "data_size": 7936 00:15:07.580 }, 00:15:07.580 { 00:15:07.580 "name": "BaseBdev2", 00:15:07.580 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:07.580 "is_configured": true, 00:15:07.580 "data_offset": 256, 00:15:07.580 "data_size": 7936 00:15:07.580 } 00:15:07.580 ] 00:15:07.580 }' 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.580 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.147 23:55:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:08.147 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.147 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:08.147 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:08.147 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.147 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.147 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.147 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.147 23:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.147 23:55:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.147 23:55:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.147 "name": "raid_bdev1", 00:15:08.147 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:08.147 "strip_size_kb": 0, 00:15:08.147 "state": "online", 00:15:08.147 "raid_level": "raid1", 00:15:08.147 "superblock": true, 00:15:08.147 "num_base_bdevs": 2, 00:15:08.147 "num_base_bdevs_discovered": 1, 00:15:08.147 "num_base_bdevs_operational": 1, 00:15:08.147 "base_bdevs_list": [ 00:15:08.147 { 00:15:08.147 "name": null, 00:15:08.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.147 "is_configured": false, 00:15:08.147 "data_offset": 0, 00:15:08.147 "data_size": 7936 00:15:08.147 }, 00:15:08.147 { 00:15:08.147 "name": "BaseBdev2", 00:15:08.147 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:08.147 "is_configured": true, 00:15:08.147 "data_offset": 
256, 00:15:08.147 "data_size": 7936 00:15:08.147 } 00:15:08.147 ] 00:15:08.147 }' 00:15:08.147 23:55:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.147 23:55:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:08.147 23:55:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.147 23:55:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:08.147 23:55:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:08.147 23:55:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.147 23:55:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.147 [2024-11-02 23:55:02.140736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:08.147 [2024-11-02 23:55:02.145422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:15:08.147 23:55:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.147 23:55:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:08.147 [2024-11-02 23:55:02.147264] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:09.083 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.083 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.083 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.083 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.083 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.083 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.083 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.083 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.083 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.083 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.356 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.356 "name": "raid_bdev1", 00:15:09.356 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:09.356 "strip_size_kb": 0, 00:15:09.356 "state": "online", 00:15:09.356 "raid_level": "raid1", 00:15:09.356 "superblock": true, 00:15:09.356 "num_base_bdevs": 2, 00:15:09.356 "num_base_bdevs_discovered": 2, 00:15:09.356 "num_base_bdevs_operational": 2, 00:15:09.356 "process": { 00:15:09.356 "type": "rebuild", 00:15:09.356 "target": "spare", 00:15:09.356 "progress": { 00:15:09.356 "blocks": 2560, 00:15:09.357 "percent": 32 00:15:09.357 } 00:15:09.357 }, 00:15:09.357 "base_bdevs_list": [ 00:15:09.357 { 00:15:09.357 "name": "spare", 00:15:09.357 "uuid": "6693ba13-edcb-5a92-9083-d82d68dcb1f8", 00:15:09.357 "is_configured": true, 00:15:09.357 "data_offset": 256, 00:15:09.357 "data_size": 7936 00:15:09.357 }, 00:15:09.357 { 00:15:09.357 "name": "BaseBdev2", 00:15:09.357 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:09.357 "is_configured": true, 00:15:09.357 "data_offset": 256, 00:15:09.357 "data_size": 7936 00:15:09.357 } 00:15:09.357 ] 00:15:09.357 }' 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:09.357 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=562 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.357 23:55:03 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.357 "name": "raid_bdev1", 00:15:09.357 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:09.357 "strip_size_kb": 0, 00:15:09.357 "state": "online", 00:15:09.357 "raid_level": "raid1", 00:15:09.357 "superblock": true, 00:15:09.357 "num_base_bdevs": 2, 00:15:09.357 "num_base_bdevs_discovered": 2, 00:15:09.357 "num_base_bdevs_operational": 2, 00:15:09.357 "process": { 00:15:09.357 "type": "rebuild", 00:15:09.357 "target": "spare", 00:15:09.357 "progress": { 00:15:09.357 "blocks": 2816, 00:15:09.357 "percent": 35 00:15:09.357 } 00:15:09.357 }, 00:15:09.357 "base_bdevs_list": [ 00:15:09.357 { 00:15:09.357 "name": "spare", 00:15:09.357 "uuid": "6693ba13-edcb-5a92-9083-d82d68dcb1f8", 00:15:09.357 "is_configured": true, 00:15:09.357 "data_offset": 256, 00:15:09.357 "data_size": 7936 00:15:09.357 }, 00:15:09.357 { 00:15:09.357 "name": "BaseBdev2", 00:15:09.357 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:09.357 "is_configured": true, 00:15:09.357 "data_offset": 256, 00:15:09.357 "data_size": 7936 00:15:09.357 } 00:15:09.357 ] 00:15:09.357 }' 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.357 23:55:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.749 23:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.749 23:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.749 23:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.749 23:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.749 23:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.749 23:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.749 23:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.749 23:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.750 23:55:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.750 23:55:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.750 23:55:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.750 23:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.750 "name": "raid_bdev1", 00:15:10.750 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:10.750 "strip_size_kb": 0, 00:15:10.750 "state": "online", 00:15:10.750 "raid_level": "raid1", 00:15:10.750 "superblock": true, 00:15:10.750 "num_base_bdevs": 2, 00:15:10.750 "num_base_bdevs_discovered": 2, 00:15:10.750 "num_base_bdevs_operational": 2, 00:15:10.750 "process": { 00:15:10.750 "type": "rebuild", 00:15:10.750 "target": "spare", 00:15:10.750 "progress": { 00:15:10.750 "blocks": 5632, 00:15:10.750 "percent": 70 00:15:10.750 } 00:15:10.750 }, 00:15:10.750 "base_bdevs_list": [ 00:15:10.750 { 
00:15:10.750 "name": "spare", 00:15:10.750 "uuid": "6693ba13-edcb-5a92-9083-d82d68dcb1f8", 00:15:10.750 "is_configured": true, 00:15:10.750 "data_offset": 256, 00:15:10.750 "data_size": 7936 00:15:10.750 }, 00:15:10.750 { 00:15:10.750 "name": "BaseBdev2", 00:15:10.750 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:10.750 "is_configured": true, 00:15:10.750 "data_offset": 256, 00:15:10.750 "data_size": 7936 00:15:10.750 } 00:15:10.750 ] 00:15:10.750 }' 00:15:10.750 23:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.750 23:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.750 23:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.750 23:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.750 23:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.316 [2024-11-02 23:55:05.257886] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:11.316 [2024-11-02 23:55:05.258048] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:11.316 [2024-11-02 23:55:05.258165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.573 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.573 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.573 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.573 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.573 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:15:11.573 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.573 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.573 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.573 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.573 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.573 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.573 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.573 "name": "raid_bdev1", 00:15:11.573 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:11.573 "strip_size_kb": 0, 00:15:11.573 "state": "online", 00:15:11.573 "raid_level": "raid1", 00:15:11.573 "superblock": true, 00:15:11.573 "num_base_bdevs": 2, 00:15:11.573 "num_base_bdevs_discovered": 2, 00:15:11.573 "num_base_bdevs_operational": 2, 00:15:11.573 "base_bdevs_list": [ 00:15:11.573 { 00:15:11.573 "name": "spare", 00:15:11.574 "uuid": "6693ba13-edcb-5a92-9083-d82d68dcb1f8", 00:15:11.574 "is_configured": true, 00:15:11.574 "data_offset": 256, 00:15:11.574 "data_size": 7936 00:15:11.574 }, 00:15:11.574 { 00:15:11.574 "name": "BaseBdev2", 00:15:11.574 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:11.574 "is_configured": true, 00:15:11.574 "data_offset": 256, 00:15:11.574 "data_size": 7936 00:15:11.574 } 00:15:11.574 ] 00:15:11.574 }' 00:15:11.574 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.833 "name": "raid_bdev1", 00:15:11.833 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:11.833 "strip_size_kb": 0, 00:15:11.833 "state": "online", 00:15:11.833 "raid_level": "raid1", 00:15:11.833 "superblock": true, 00:15:11.833 "num_base_bdevs": 2, 00:15:11.833 "num_base_bdevs_discovered": 2, 00:15:11.833 "num_base_bdevs_operational": 2, 00:15:11.833 "base_bdevs_list": [ 00:15:11.833 { 00:15:11.833 "name": "spare", 00:15:11.833 "uuid": "6693ba13-edcb-5a92-9083-d82d68dcb1f8", 00:15:11.833 "is_configured": true, 00:15:11.833 
"data_offset": 256, 00:15:11.833 "data_size": 7936 00:15:11.833 }, 00:15:11.833 { 00:15:11.833 "name": "BaseBdev2", 00:15:11.833 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:11.833 "is_configured": true, 00:15:11.833 "data_offset": 256, 00:15:11.833 "data_size": 7936 00:15:11.833 } 00:15:11.833 ] 00:15:11.833 }' 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.833 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.091 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.091 "name": "raid_bdev1", 00:15:12.091 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:12.091 "strip_size_kb": 0, 00:15:12.091 "state": "online", 00:15:12.091 "raid_level": "raid1", 00:15:12.091 "superblock": true, 00:15:12.091 "num_base_bdevs": 2, 00:15:12.091 "num_base_bdevs_discovered": 2, 00:15:12.091 "num_base_bdevs_operational": 2, 00:15:12.091 "base_bdevs_list": [ 00:15:12.091 { 00:15:12.091 "name": "spare", 00:15:12.091 "uuid": "6693ba13-edcb-5a92-9083-d82d68dcb1f8", 00:15:12.091 "is_configured": true, 00:15:12.091 "data_offset": 256, 00:15:12.091 "data_size": 7936 00:15:12.091 }, 00:15:12.091 { 00:15:12.091 "name": "BaseBdev2", 00:15:12.091 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:12.091 "is_configured": true, 00:15:12.091 "data_offset": 256, 00:15:12.091 "data_size": 7936 00:15:12.091 } 00:15:12.091 ] 00:15:12.091 }' 00:15:12.091 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.091 23:55:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.350 
[2024-11-02 23:55:06.348818] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.350 [2024-11-02 23:55:06.348855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.350 [2024-11-02 23:55:06.348968] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.350 [2024-11-02 23:55:06.349051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.350 [2024-11-02 23:55:06.349066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:12.350 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:12.610 /dev/nbd0 00:15:12.610 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:12.610 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:12.610 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:12.610 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:15:12.610 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:12.610 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:12.610 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:12.610 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:15:12.610 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:12.610 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:12.610 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.611 1+0 records in 00:15:12.611 1+0 records out 00:15:12.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379917 s, 10.8 MB/s 00:15:12.611 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.611 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:15:12.611 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.611 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:12.611 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:15:12.611 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.611 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:12.611 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:12.871 /dev/nbd1 00:15:12.871 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:12.871 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:12.871 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:12.871 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:15:12.871 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:12.871 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:12.871 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:12.871 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:15:12.871 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:12.871 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:12.871 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.871 1+0 records in 00:15:12.871 1+0 records out 00:15:12.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352306 s, 11.6 MB/s 00:15:12.871 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.871 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:15:12.871 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.871 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:12.871 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:15:12.871 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.871 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:12.871 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:13.130 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:13.130 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.130 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:13.130 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:13.130 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:13.130 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.130 23:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:13.130 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:13.130 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:13.130 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:13.130 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.130 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.130 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:13.130 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:13.130 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.130 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.130 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:13.389 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:13.389 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:13.389 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:13.389 23:55:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.389 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.389 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:13.389 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:13.389 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.389 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:13.389 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:13.389 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.389 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.389 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.389 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:13.389 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.389 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.389 [2024-11-02 23:55:07.406717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:13.389 [2024-11-02 23:55:07.406805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.389 [2024-11-02 23:55:07.406832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:13.390 [2024-11-02 23:55:07.406850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.390 [2024-11-02 23:55:07.409264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.390 
[2024-11-02 23:55:07.409311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:13.390 [2024-11-02 23:55:07.409410] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:13.390 [2024-11-02 23:55:07.409471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:13.390 [2024-11-02 23:55:07.409602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.390 spare 00:15:13.390 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.390 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:13.390 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.390 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.649 [2024-11-02 23:55:07.509537] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:15:13.649 [2024-11-02 23:55:07.509566] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:13.649 [2024-11-02 23:55:07.509873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:15:13.649 [2024-11-02 23:55:07.510050] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:15:13.649 [2024-11-02 23:55:07.510071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:15:13.649 [2024-11-02 23:55:07.510214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.649 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.649 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:13.649 23:55:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.649 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.649 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.649 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.649 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.649 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.649 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.649 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.649 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.649 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.649 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.649 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.649 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.649 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.649 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.649 "name": "raid_bdev1", 00:15:13.649 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:13.649 "strip_size_kb": 0, 00:15:13.649 "state": "online", 00:15:13.649 "raid_level": "raid1", 00:15:13.649 "superblock": true, 00:15:13.649 "num_base_bdevs": 2, 00:15:13.649 "num_base_bdevs_discovered": 2, 00:15:13.649 "num_base_bdevs_operational": 2, 
00:15:13.649 "base_bdevs_list": [ 00:15:13.649 { 00:15:13.649 "name": "spare", 00:15:13.649 "uuid": "6693ba13-edcb-5a92-9083-d82d68dcb1f8", 00:15:13.649 "is_configured": true, 00:15:13.649 "data_offset": 256, 00:15:13.649 "data_size": 7936 00:15:13.649 }, 00:15:13.649 { 00:15:13.649 "name": "BaseBdev2", 00:15:13.649 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:13.649 "is_configured": true, 00:15:13.649 "data_offset": 256, 00:15:13.649 "data_size": 7936 00:15:13.649 } 00:15:13.649 ] 00:15:13.649 }' 00:15:13.649 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.649 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.908 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:13.908 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.908 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:13.908 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:13.908 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.908 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.908 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.908 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.908 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.908 23:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.168 "name": "raid_bdev1", 00:15:14.168 
"uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:14.168 "strip_size_kb": 0, 00:15:14.168 "state": "online", 00:15:14.168 "raid_level": "raid1", 00:15:14.168 "superblock": true, 00:15:14.168 "num_base_bdevs": 2, 00:15:14.168 "num_base_bdevs_discovered": 2, 00:15:14.168 "num_base_bdevs_operational": 2, 00:15:14.168 "base_bdevs_list": [ 00:15:14.168 { 00:15:14.168 "name": "spare", 00:15:14.168 "uuid": "6693ba13-edcb-5a92-9083-d82d68dcb1f8", 00:15:14.168 "is_configured": true, 00:15:14.168 "data_offset": 256, 00:15:14.168 "data_size": 7936 00:15:14.168 }, 00:15:14.168 { 00:15:14.168 "name": "BaseBdev2", 00:15:14.168 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:14.168 "is_configured": true, 00:15:14.168 "data_offset": 256, 00:15:14.168 "data_size": 7936 00:15:14.168 } 00:15:14.168 ] 00:15:14.168 }' 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.168 [2024-11-02 23:55:08.153562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.168 
23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.168 "name": "raid_bdev1", 00:15:14.168 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:14.168 "strip_size_kb": 0, 00:15:14.168 "state": "online", 00:15:14.168 "raid_level": "raid1", 00:15:14.168 "superblock": true, 00:15:14.168 "num_base_bdevs": 2, 00:15:14.168 "num_base_bdevs_discovered": 1, 00:15:14.168 "num_base_bdevs_operational": 1, 00:15:14.168 "base_bdevs_list": [ 00:15:14.168 { 00:15:14.168 "name": null, 00:15:14.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.168 "is_configured": false, 00:15:14.168 "data_offset": 0, 00:15:14.168 "data_size": 7936 00:15:14.168 }, 00:15:14.168 { 00:15:14.168 "name": "BaseBdev2", 00:15:14.168 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:14.168 "is_configured": true, 00:15:14.168 "data_offset": 256, 00:15:14.168 "data_size": 7936 00:15:14.168 } 00:15:14.168 ] 00:15:14.168 }' 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.168 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.736 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:14.736 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.736 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.736 [2024-11-02 23:55:08.636800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.736 [2024-11-02 23:55:08.636943] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:15:14.736 [2024-11-02 23:55:08.636965] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:14.736 [2024-11-02 23:55:08.637027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.736 [2024-11-02 23:55:08.645614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:15:14.736 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.736 23:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:14.736 [2024-11-02 23:55:08.647803] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:15.673 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.673 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.673 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.673 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.673 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.673 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.673 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.673 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.673 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:15.673 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.673 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.673 
"name": "raid_bdev1", 00:15:15.673 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:15.673 "strip_size_kb": 0, 00:15:15.673 "state": "online", 00:15:15.673 "raid_level": "raid1", 00:15:15.673 "superblock": true, 00:15:15.673 "num_base_bdevs": 2, 00:15:15.673 "num_base_bdevs_discovered": 2, 00:15:15.673 "num_base_bdevs_operational": 2, 00:15:15.673 "process": { 00:15:15.673 "type": "rebuild", 00:15:15.673 "target": "spare", 00:15:15.673 "progress": { 00:15:15.673 "blocks": 2560, 00:15:15.673 "percent": 32 00:15:15.673 } 00:15:15.673 }, 00:15:15.673 "base_bdevs_list": [ 00:15:15.673 { 00:15:15.673 "name": "spare", 00:15:15.673 "uuid": "6693ba13-edcb-5a92-9083-d82d68dcb1f8", 00:15:15.673 "is_configured": true, 00:15:15.673 "data_offset": 256, 00:15:15.673 "data_size": 7936 00:15:15.673 }, 00:15:15.673 { 00:15:15.673 "name": "BaseBdev2", 00:15:15.673 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:15.673 "is_configured": true, 00:15:15.673 "data_offset": 256, 00:15:15.673 "data_size": 7936 00:15:15.673 } 00:15:15.673 ] 00:15:15.673 }' 00:15:15.673 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.673 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.673 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:15.932 [2024-11-02 23:55:09.787337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:15.932 [2024-11-02 
23:55:09.853426] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:15.932 [2024-11-02 23:55:09.853492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.932 [2024-11-02 23:55:09.853509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:15.932 [2024-11-02 23:55:09.853516] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.932 23:55:09 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.932 "name": "raid_bdev1", 00:15:15.932 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:15.932 "strip_size_kb": 0, 00:15:15.932 "state": "online", 00:15:15.932 "raid_level": "raid1", 00:15:15.932 "superblock": true, 00:15:15.932 "num_base_bdevs": 2, 00:15:15.932 "num_base_bdevs_discovered": 1, 00:15:15.932 "num_base_bdevs_operational": 1, 00:15:15.932 "base_bdevs_list": [ 00:15:15.932 { 00:15:15.932 "name": null, 00:15:15.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.932 "is_configured": false, 00:15:15.932 "data_offset": 0, 00:15:15.932 "data_size": 7936 00:15:15.932 }, 00:15:15.932 { 00:15:15.932 "name": "BaseBdev2", 00:15:15.932 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:15.932 "is_configured": true, 00:15:15.932 "data_offset": 256, 00:15:15.932 "data_size": 7936 00:15:15.932 } 00:15:15.932 ] 00:15:15.932 }' 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.932 23:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:16.501 23:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:16.501 23:55:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.501 23:55:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:16.501 [2024-11-02 23:55:10.313724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:16.501 [2024-11-02 23:55:10.313818] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.501 [2024-11-02 23:55:10.313846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:16.501 [2024-11-02 23:55:10.313856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.501 [2024-11-02 23:55:10.314300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.501 [2024-11-02 23:55:10.314324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:16.501 [2024-11-02 23:55:10.314408] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:16.501 [2024-11-02 23:55:10.314419] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:16.501 [2024-11-02 23:55:10.314435] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:16.501 [2024-11-02 23:55:10.314455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:16.501 [2024-11-02 23:55:10.319261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:15:16.501 spare 00:15:16.501 23:55:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.501 23:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:16.501 [2024-11-02 23:55:10.321132] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.444 "name": "raid_bdev1", 00:15:17.444 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:17.444 "strip_size_kb": 0, 00:15:17.444 
"state": "online", 00:15:17.444 "raid_level": "raid1", 00:15:17.444 "superblock": true, 00:15:17.444 "num_base_bdevs": 2, 00:15:17.444 "num_base_bdevs_discovered": 2, 00:15:17.444 "num_base_bdevs_operational": 2, 00:15:17.444 "process": { 00:15:17.444 "type": "rebuild", 00:15:17.444 "target": "spare", 00:15:17.444 "progress": { 00:15:17.444 "blocks": 2560, 00:15:17.444 "percent": 32 00:15:17.444 } 00:15:17.444 }, 00:15:17.444 "base_bdevs_list": [ 00:15:17.444 { 00:15:17.444 "name": "spare", 00:15:17.444 "uuid": "6693ba13-edcb-5a92-9083-d82d68dcb1f8", 00:15:17.444 "is_configured": true, 00:15:17.444 "data_offset": 256, 00:15:17.444 "data_size": 7936 00:15:17.444 }, 00:15:17.444 { 00:15:17.444 "name": "BaseBdev2", 00:15:17.444 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:17.444 "is_configured": true, 00:15:17.444 "data_offset": 256, 00:15:17.444 "data_size": 7936 00:15:17.444 } 00:15:17.444 ] 00:15:17.444 }' 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.444 [2024-11-02 23:55:11.486252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.444 [2024-11-02 23:55:11.525567] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:17.444 [2024-11-02 23:55:11.525623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.444 [2024-11-02 23:55:11.525653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.444 [2024-11-02 23:55:11.525662] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.444 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.704 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:17.704 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.704 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.704 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.704 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.704 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.704 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.704 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.704 23:55:11 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.704 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.704 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.704 "name": "raid_bdev1", 00:15:17.704 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:17.704 "strip_size_kb": 0, 00:15:17.704 "state": "online", 00:15:17.704 "raid_level": "raid1", 00:15:17.704 "superblock": true, 00:15:17.704 "num_base_bdevs": 2, 00:15:17.704 "num_base_bdevs_discovered": 1, 00:15:17.704 "num_base_bdevs_operational": 1, 00:15:17.704 "base_bdevs_list": [ 00:15:17.704 { 00:15:17.704 "name": null, 00:15:17.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.704 "is_configured": false, 00:15:17.704 "data_offset": 0, 00:15:17.704 "data_size": 7936 00:15:17.704 }, 00:15:17.704 { 00:15:17.704 "name": "BaseBdev2", 00:15:17.704 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:17.704 "is_configured": true, 00:15:17.704 "data_offset": 256, 00:15:17.704 "data_size": 7936 00:15:17.704 } 00:15:17.704 ] 00:15:17.704 }' 00:15:17.704 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.704 23:55:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.963 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:17.963 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.963 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.963 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.963 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.963 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.963 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.963 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.963 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.963 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.223 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.223 "name": "raid_bdev1", 00:15:18.223 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:18.223 "strip_size_kb": 0, 00:15:18.223 "state": "online", 00:15:18.223 "raid_level": "raid1", 00:15:18.223 "superblock": true, 00:15:18.223 "num_base_bdevs": 2, 00:15:18.223 "num_base_bdevs_discovered": 1, 00:15:18.223 "num_base_bdevs_operational": 1, 00:15:18.223 "base_bdevs_list": [ 00:15:18.223 { 00:15:18.223 "name": null, 00:15:18.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.223 "is_configured": false, 00:15:18.223 "data_offset": 0, 00:15:18.223 "data_size": 7936 00:15:18.223 }, 00:15:18.223 { 00:15:18.223 "name": "BaseBdev2", 00:15:18.223 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:18.223 "is_configured": true, 00:15:18.223 "data_offset": 256, 00:15:18.223 "data_size": 7936 00:15:18.223 } 00:15:18.223 ] 00:15:18.223 }' 00:15:18.223 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.223 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:18.223 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.223 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:18.223 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:18.223 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.223 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.223 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.223 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:18.223 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.223 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.223 [2024-11-02 23:55:12.197225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:18.223 [2024-11-02 23:55:12.197277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.223 [2024-11-02 23:55:12.197295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:18.223 [2024-11-02 23:55:12.197306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.223 [2024-11-02 23:55:12.197681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.223 [2024-11-02 23:55:12.197706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:18.223 [2024-11-02 23:55:12.197789] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:18.223 [2024-11-02 23:55:12.197808] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:18.223 [2024-11-02 23:55:12.197815] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:18.223 [2024-11-02 23:55:12.197826] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:18.223 BaseBdev1 00:15:18.223 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.223 23:55:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:19.161 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:19.161 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.161 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.161 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.161 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.161 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:19.161 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.161 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.161 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.161 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.161 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.161 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.161 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.161 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.161 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.421 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.421 "name": "raid_bdev1", 00:15:19.421 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:19.421 "strip_size_kb": 0, 00:15:19.421 "state": "online", 00:15:19.421 "raid_level": "raid1", 00:15:19.421 "superblock": true, 00:15:19.421 "num_base_bdevs": 2, 00:15:19.421 "num_base_bdevs_discovered": 1, 00:15:19.421 "num_base_bdevs_operational": 1, 00:15:19.421 "base_bdevs_list": [ 00:15:19.421 { 00:15:19.421 "name": null, 00:15:19.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.421 "is_configured": false, 00:15:19.421 "data_offset": 0, 00:15:19.421 "data_size": 7936 00:15:19.421 }, 00:15:19.421 { 00:15:19.421 "name": "BaseBdev2", 00:15:19.421 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:19.421 "is_configured": true, 00:15:19.421 "data_offset": 256, 00:15:19.421 "data_size": 7936 00:15:19.421 } 00:15:19.421 ] 00:15:19.421 }' 00:15:19.421 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.421 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.680 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:19.680 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.680 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:19.680 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:19.680 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.680 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.680 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.680 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.680 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.680 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.680 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.680 "name": "raid_bdev1", 00:15:19.680 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:19.680 "strip_size_kb": 0, 00:15:19.680 "state": "online", 00:15:19.680 "raid_level": "raid1", 00:15:19.680 "superblock": true, 00:15:19.680 "num_base_bdevs": 2, 00:15:19.680 "num_base_bdevs_discovered": 1, 00:15:19.680 "num_base_bdevs_operational": 1, 00:15:19.680 "base_bdevs_list": [ 00:15:19.680 { 00:15:19.680 "name": null, 00:15:19.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.680 "is_configured": false, 00:15:19.680 "data_offset": 0, 00:15:19.680 "data_size": 7936 00:15:19.680 }, 00:15:19.680 { 00:15:19.680 "name": "BaseBdev2", 00:15:19.680 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:19.680 "is_configured": true, 00:15:19.680 "data_offset": 256, 00:15:19.680 "data_size": 7936 00:15:19.680 } 00:15:19.680 ] 00:15:19.680 }' 00:15:19.681 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.681 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:19.681 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.940 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:19.940 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:19.940 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:15:19.940 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:19.940 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:19.940 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:19.940 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:19.940 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:19.940 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:19.940 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.940 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.940 [2024-11-02 23:55:13.794567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.940 [2024-11-02 23:55:13.794772] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:19.940 [2024-11-02 23:55:13.794793] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:19.940 request: 00:15:19.940 { 00:15:19.940 "base_bdev": "BaseBdev1", 00:15:19.940 "raid_bdev": "raid_bdev1", 00:15:19.940 "method": "bdev_raid_add_base_bdev", 00:15:19.940 "req_id": 1 00:15:19.940 } 00:15:19.940 Got JSON-RPC error response 00:15:19.940 response: 00:15:19.940 { 00:15:19.940 "code": -22, 00:15:19.940 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:19.940 } 00:15:19.940 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:15:19.940 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:15:19.940 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:19.940 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:19.940 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:19.940 23:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:20.878 23:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:20.878 23:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.878 23:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.878 23:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.878 23:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.878 23:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:20.878 23:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.878 23:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.878 23:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.878 23:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.878 23:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.878 23:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.878 23:55:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:20.878 23:55:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.878 23:55:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.878 23:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.878 "name": "raid_bdev1", 00:15:20.878 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:20.878 "strip_size_kb": 0, 00:15:20.878 "state": "online", 00:15:20.878 "raid_level": "raid1", 00:15:20.878 "superblock": true, 00:15:20.878 "num_base_bdevs": 2, 00:15:20.878 "num_base_bdevs_discovered": 1, 00:15:20.878 "num_base_bdevs_operational": 1, 00:15:20.878 "base_bdevs_list": [ 00:15:20.878 { 00:15:20.878 "name": null, 00:15:20.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.878 "is_configured": false, 00:15:20.878 "data_offset": 0, 00:15:20.878 "data_size": 7936 00:15:20.878 }, 00:15:20.878 { 00:15:20.878 "name": "BaseBdev2", 00:15:20.878 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:20.878 "is_configured": true, 00:15:20.878 "data_offset": 256, 00:15:20.878 "data_size": 7936 00:15:20.878 } 00:15:20.878 ] 00:15:20.878 }' 00:15:20.878 23:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.878 23:55:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.446 23:55:15 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.446 "name": "raid_bdev1", 00:15:21.446 "uuid": "273c6b84-9e32-46cf-a058-21eedf74ee03", 00:15:21.446 "strip_size_kb": 0, 00:15:21.446 "state": "online", 00:15:21.446 "raid_level": "raid1", 00:15:21.446 "superblock": true, 00:15:21.446 "num_base_bdevs": 2, 00:15:21.446 "num_base_bdevs_discovered": 1, 00:15:21.446 "num_base_bdevs_operational": 1, 00:15:21.446 "base_bdevs_list": [ 00:15:21.446 { 00:15:21.446 "name": null, 00:15:21.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.446 "is_configured": false, 00:15:21.446 "data_offset": 0, 00:15:21.446 "data_size": 7936 00:15:21.446 }, 00:15:21.446 { 00:15:21.446 "name": "BaseBdev2", 00:15:21.446 "uuid": "fc5a35fb-0c47-50b0-b22c-966e2e80ec56", 00:15:21.446 "is_configured": true, 00:15:21.446 "data_offset": 256, 00:15:21.446 "data_size": 7936 00:15:21.446 } 00:15:21.446 ] 00:15:21.446 }' 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:21.446 23:55:15 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 96649 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 96649 ']' 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 96649 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 96649 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:21.446 killing process with pid 96649 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 96649' 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 96649 00:15:21.446 Received shutdown signal, test time was about 60.000000 seconds 00:15:21.446 00:15:21.446 Latency(us) 00:15:21.446 [2024-11-02T23:55:15.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.446 [2024-11-02T23:55:15.541Z] =================================================================================================================== 00:15:21.446 [2024-11-02T23:55:15.541Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:21.446 [2024-11-02 23:55:15.413532] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.446 [2024-11-02 23:55:15.413662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.446 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 96649 00:15:21.446 [2024-11-02 
23:55:15.413721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.446 [2024-11-02 23:55:15.413733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:15:21.446 [2024-11-02 23:55:15.443889] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:21.706 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:15:21.706 00:15:21.706 real 0m18.427s 00:15:21.706 user 0m24.508s 00:15:21.706 sys 0m2.692s 00:15:21.706 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:21.706 23:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.706 ************************************ 00:15:21.706 END TEST raid_rebuild_test_sb_4k 00:15:21.706 ************************************ 00:15:21.706 23:55:15 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:15:21.706 23:55:15 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:15:21.706 23:55:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:21.706 23:55:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:21.707 23:55:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:21.707 ************************************ 00:15:21.707 START TEST raid_state_function_test_sb_md_separate 00:15:21.707 ************************************ 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:21.707 
23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:21.707 23:55:15 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97342 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97342' 00:15:21.707 Process raid pid: 97342 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97342 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 97342 ']' 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:21.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:21.707 23:55:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.967 [2024-11-02 23:55:15.825213] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:15:21.967 [2024-11-02 23:55:15.825330] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.967 [2024-11-02 23:55:15.984177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.967 [2024-11-02 23:55:16.010248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.967 [2024-11-02 23:55:16.052794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.967 [2024-11-02 23:55:16.052830] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.904 [2024-11-02 23:55:16.650251] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:22.904 [2024-11-02 23:55:16.650300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:15:22.904 [2024-11-02 23:55:16.650312] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.904 [2024-11-02 23:55:16.650322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.904 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.905 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.905 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.905 "name": "Existed_Raid", 00:15:22.905 "uuid": "97c0b199-3880-4ae0-b776-d6e79c4e7f8c", 00:15:22.905 "strip_size_kb": 0, 00:15:22.905 "state": "configuring", 00:15:22.905 "raid_level": "raid1", 00:15:22.905 "superblock": true, 00:15:22.905 "num_base_bdevs": 2, 00:15:22.905 "num_base_bdevs_discovered": 0, 00:15:22.905 "num_base_bdevs_operational": 2, 00:15:22.905 "base_bdevs_list": [ 00:15:22.905 { 00:15:22.905 "name": "BaseBdev1", 00:15:22.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.905 "is_configured": false, 00:15:22.905 "data_offset": 0, 00:15:22.905 "data_size": 0 00:15:22.905 }, 00:15:22.905 { 00:15:22.905 "name": "BaseBdev2", 00:15:22.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.905 "is_configured": false, 00:15:22.905 "data_offset": 0, 00:15:22.905 "data_size": 0 00:15:22.905 } 00:15:22.905 ] 00:15:22.905 }' 00:15:22.905 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.905 23:55:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.164 
[2024-11-02 23:55:17.065463] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:23.164 [2024-11-02 23:55:17.065505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.164 [2024-11-02 23:55:17.073449] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.164 [2024-11-02 23:55:17.073488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.164 [2024-11-02 23:55:17.073496] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.164 [2024-11-02 23:55:17.073515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.164 [2024-11-02 23:55:17.094898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.164 
BaseBdev1 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.164 [ 00:15:23.164 { 00:15:23.164 "name": "BaseBdev1", 00:15:23.164 "aliases": [ 00:15:23.164 "5450c019-3eb1-48cd-82b0-0ab6dc1f0a4d" 00:15:23.164 ], 00:15:23.164 "product_name": "Malloc disk", 
00:15:23.164 "block_size": 4096, 00:15:23.164 "num_blocks": 8192, 00:15:23.164 "uuid": "5450c019-3eb1-48cd-82b0-0ab6dc1f0a4d", 00:15:23.164 "md_size": 32, 00:15:23.164 "md_interleave": false, 00:15:23.164 "dif_type": 0, 00:15:23.164 "assigned_rate_limits": { 00:15:23.164 "rw_ios_per_sec": 0, 00:15:23.164 "rw_mbytes_per_sec": 0, 00:15:23.164 "r_mbytes_per_sec": 0, 00:15:23.164 "w_mbytes_per_sec": 0 00:15:23.164 }, 00:15:23.164 "claimed": true, 00:15:23.164 "claim_type": "exclusive_write", 00:15:23.164 "zoned": false, 00:15:23.164 "supported_io_types": { 00:15:23.164 "read": true, 00:15:23.164 "write": true, 00:15:23.164 "unmap": true, 00:15:23.164 "flush": true, 00:15:23.164 "reset": true, 00:15:23.164 "nvme_admin": false, 00:15:23.164 "nvme_io": false, 00:15:23.164 "nvme_io_md": false, 00:15:23.164 "write_zeroes": true, 00:15:23.164 "zcopy": true, 00:15:23.164 "get_zone_info": false, 00:15:23.164 "zone_management": false, 00:15:23.164 "zone_append": false, 00:15:23.164 "compare": false, 00:15:23.164 "compare_and_write": false, 00:15:23.164 "abort": true, 00:15:23.164 "seek_hole": false, 00:15:23.164 "seek_data": false, 00:15:23.164 "copy": true, 00:15:23.164 "nvme_iov_md": false 00:15:23.164 }, 00:15:23.164 "memory_domains": [ 00:15:23.164 { 00:15:23.164 "dma_device_id": "system", 00:15:23.164 "dma_device_type": 1 00:15:23.164 }, 00:15:23.164 { 00:15:23.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.164 "dma_device_type": 2 00:15:23.164 } 00:15:23.164 ], 00:15:23.164 "driver_specific": {} 00:15:23.164 } 00:15:23.164 ] 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:23.164 23:55:17 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.164 "name": "Existed_Raid", 00:15:23.164 "uuid": "a64744c2-de5b-4e4b-929a-df7d6c601f7e", 
00:15:23.164 "strip_size_kb": 0, 00:15:23.164 "state": "configuring", 00:15:23.164 "raid_level": "raid1", 00:15:23.164 "superblock": true, 00:15:23.164 "num_base_bdevs": 2, 00:15:23.164 "num_base_bdevs_discovered": 1, 00:15:23.164 "num_base_bdevs_operational": 2, 00:15:23.164 "base_bdevs_list": [ 00:15:23.164 { 00:15:23.164 "name": "BaseBdev1", 00:15:23.164 "uuid": "5450c019-3eb1-48cd-82b0-0ab6dc1f0a4d", 00:15:23.164 "is_configured": true, 00:15:23.164 "data_offset": 256, 00:15:23.164 "data_size": 7936 00:15:23.164 }, 00:15:23.164 { 00:15:23.164 "name": "BaseBdev2", 00:15:23.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.164 "is_configured": false, 00:15:23.164 "data_offset": 0, 00:15:23.164 "data_size": 0 00:15:23.164 } 00:15:23.164 ] 00:15:23.164 }' 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.164 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.733 [2024-11-02 23:55:17.590107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:23.733 [2024-11-02 23:55:17.590148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:23.733 23:55:17 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.733 [2024-11-02 23:55:17.602135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.733 [2024-11-02 23:55:17.603924] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.733 [2024-11-02 23:55:17.603961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.733 "name": "Existed_Raid", 00:15:23.733 "uuid": "9ad9d63c-1774-4771-9874-a4b378363cc6", 00:15:23.733 "strip_size_kb": 0, 00:15:23.733 "state": "configuring", 00:15:23.733 "raid_level": "raid1", 00:15:23.733 "superblock": true, 00:15:23.733 "num_base_bdevs": 2, 00:15:23.733 "num_base_bdevs_discovered": 1, 00:15:23.733 "num_base_bdevs_operational": 2, 00:15:23.733 "base_bdevs_list": [ 00:15:23.733 { 00:15:23.733 "name": "BaseBdev1", 00:15:23.733 "uuid": "5450c019-3eb1-48cd-82b0-0ab6dc1f0a4d", 00:15:23.733 "is_configured": true, 00:15:23.733 "data_offset": 256, 00:15:23.733 "data_size": 7936 00:15:23.733 }, 00:15:23.733 { 00:15:23.733 "name": "BaseBdev2", 00:15:23.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.733 "is_configured": false, 00:15:23.733 "data_offset": 0, 00:15:23.733 "data_size": 0 00:15:23.733 } 00:15:23.733 ] 00:15:23.733 }' 00:15:23.733 23:55:17 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.733 23:55:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.993 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:15:23.993 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.993 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.274 [2024-11-02 23:55:18.097064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.274 [2024-11-02 23:55:18.097261] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:24.274 [2024-11-02 23:55:18.097275] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:24.274 [2024-11-02 23:55:18.097364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:24.274 [2024-11-02 23:55:18.097469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:24.274 [2024-11-02 23:55:18.097498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:15:24.274 [2024-11-02 23:55:18.097577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.274 BaseBdev2 00:15:24.274 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.274 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:24.274 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:24.274 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.275 [ 00:15:24.275 { 00:15:24.275 "name": "BaseBdev2", 00:15:24.275 "aliases": [ 00:15:24.275 "685377d6-baea-460e-8708-25affa372a3b" 00:15:24.275 ], 00:15:24.275 "product_name": "Malloc disk", 00:15:24.275 "block_size": 4096, 00:15:24.275 "num_blocks": 8192, 00:15:24.275 "uuid": "685377d6-baea-460e-8708-25affa372a3b", 00:15:24.275 "md_size": 32, 00:15:24.275 "md_interleave": false, 00:15:24.275 "dif_type": 0, 00:15:24.275 "assigned_rate_limits": { 00:15:24.275 "rw_ios_per_sec": 0, 00:15:24.275 "rw_mbytes_per_sec": 0, 00:15:24.275 "r_mbytes_per_sec": 0, 00:15:24.275 "w_mbytes_per_sec": 0 00:15:24.275 }, 00:15:24.275 "claimed": true, 00:15:24.275 "claim_type": 
"exclusive_write", 00:15:24.275 "zoned": false, 00:15:24.275 "supported_io_types": { 00:15:24.275 "read": true, 00:15:24.275 "write": true, 00:15:24.275 "unmap": true, 00:15:24.275 "flush": true, 00:15:24.275 "reset": true, 00:15:24.275 "nvme_admin": false, 00:15:24.275 "nvme_io": false, 00:15:24.275 "nvme_io_md": false, 00:15:24.275 "write_zeroes": true, 00:15:24.275 "zcopy": true, 00:15:24.275 "get_zone_info": false, 00:15:24.275 "zone_management": false, 00:15:24.275 "zone_append": false, 00:15:24.275 "compare": false, 00:15:24.275 "compare_and_write": false, 00:15:24.275 "abort": true, 00:15:24.275 "seek_hole": false, 00:15:24.275 "seek_data": false, 00:15:24.275 "copy": true, 00:15:24.275 "nvme_iov_md": false 00:15:24.275 }, 00:15:24.275 "memory_domains": [ 00:15:24.275 { 00:15:24.275 "dma_device_id": "system", 00:15:24.275 "dma_device_type": 1 00:15:24.275 }, 00:15:24.275 { 00:15:24.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.275 "dma_device_type": 2 00:15:24.275 } 00:15:24.275 ], 00:15:24.275 "driver_specific": {} 00:15:24.275 } 00:15:24.275 ] 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.275 
23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.275 "name": "Existed_Raid", 00:15:24.275 "uuid": "9ad9d63c-1774-4771-9874-a4b378363cc6", 00:15:24.275 "strip_size_kb": 0, 00:15:24.275 "state": "online", 00:15:24.275 "raid_level": "raid1", 00:15:24.275 "superblock": true, 00:15:24.275 "num_base_bdevs": 2, 00:15:24.275 "num_base_bdevs_discovered": 2, 00:15:24.275 "num_base_bdevs_operational": 2, 00:15:24.275 
"base_bdevs_list": [ 00:15:24.275 { 00:15:24.275 "name": "BaseBdev1", 00:15:24.275 "uuid": "5450c019-3eb1-48cd-82b0-0ab6dc1f0a4d", 00:15:24.275 "is_configured": true, 00:15:24.275 "data_offset": 256, 00:15:24.275 "data_size": 7936 00:15:24.275 }, 00:15:24.275 { 00:15:24.275 "name": "BaseBdev2", 00:15:24.275 "uuid": "685377d6-baea-460e-8708-25affa372a3b", 00:15:24.275 "is_configured": true, 00:15:24.275 "data_offset": 256, 00:15:24.275 "data_size": 7936 00:15:24.275 } 00:15:24.275 ] 00:15:24.275 }' 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.275 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.547 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:24.547 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:24.547 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:24.547 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:24.547 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:24.547 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:24.547 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:24.547 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.547 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.547 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:15:24.547 [2024-11-02 23:55:18.624470] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:24.809 "name": "Existed_Raid", 00:15:24.809 "aliases": [ 00:15:24.809 "9ad9d63c-1774-4771-9874-a4b378363cc6" 00:15:24.809 ], 00:15:24.809 "product_name": "Raid Volume", 00:15:24.809 "block_size": 4096, 00:15:24.809 "num_blocks": 7936, 00:15:24.809 "uuid": "9ad9d63c-1774-4771-9874-a4b378363cc6", 00:15:24.809 "md_size": 32, 00:15:24.809 "md_interleave": false, 00:15:24.809 "dif_type": 0, 00:15:24.809 "assigned_rate_limits": { 00:15:24.809 "rw_ios_per_sec": 0, 00:15:24.809 "rw_mbytes_per_sec": 0, 00:15:24.809 "r_mbytes_per_sec": 0, 00:15:24.809 "w_mbytes_per_sec": 0 00:15:24.809 }, 00:15:24.809 "claimed": false, 00:15:24.809 "zoned": false, 00:15:24.809 "supported_io_types": { 00:15:24.809 "read": true, 00:15:24.809 "write": true, 00:15:24.809 "unmap": false, 00:15:24.809 "flush": false, 00:15:24.809 "reset": true, 00:15:24.809 "nvme_admin": false, 00:15:24.809 "nvme_io": false, 00:15:24.809 "nvme_io_md": false, 00:15:24.809 "write_zeroes": true, 00:15:24.809 "zcopy": false, 00:15:24.809 "get_zone_info": false, 00:15:24.809 "zone_management": false, 00:15:24.809 "zone_append": false, 00:15:24.809 "compare": false, 00:15:24.809 "compare_and_write": false, 00:15:24.809 "abort": false, 00:15:24.809 "seek_hole": false, 00:15:24.809 "seek_data": false, 00:15:24.809 "copy": false, 00:15:24.809 "nvme_iov_md": false 00:15:24.809 }, 00:15:24.809 "memory_domains": [ 00:15:24.809 { 00:15:24.809 "dma_device_id": "system", 00:15:24.809 "dma_device_type": 1 00:15:24.809 }, 00:15:24.809 { 00:15:24.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.809 "dma_device_type": 2 00:15:24.809 }, 00:15:24.809 { 
00:15:24.809 "dma_device_id": "system", 00:15:24.809 "dma_device_type": 1 00:15:24.809 }, 00:15:24.809 { 00:15:24.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.809 "dma_device_type": 2 00:15:24.809 } 00:15:24.809 ], 00:15:24.809 "driver_specific": { 00:15:24.809 "raid": { 00:15:24.809 "uuid": "9ad9d63c-1774-4771-9874-a4b378363cc6", 00:15:24.809 "strip_size_kb": 0, 00:15:24.809 "state": "online", 00:15:24.809 "raid_level": "raid1", 00:15:24.809 "superblock": true, 00:15:24.809 "num_base_bdevs": 2, 00:15:24.809 "num_base_bdevs_discovered": 2, 00:15:24.809 "num_base_bdevs_operational": 2, 00:15:24.809 "base_bdevs_list": [ 00:15:24.809 { 00:15:24.809 "name": "BaseBdev1", 00:15:24.809 "uuid": "5450c019-3eb1-48cd-82b0-0ab6dc1f0a4d", 00:15:24.809 "is_configured": true, 00:15:24.809 "data_offset": 256, 00:15:24.809 "data_size": 7936 00:15:24.809 }, 00:15:24.809 { 00:15:24.809 "name": "BaseBdev2", 00:15:24.809 "uuid": "685377d6-baea-460e-8708-25affa372a3b", 00:15:24.809 "is_configured": true, 00:15:24.809 "data_offset": 256, 00:15:24.809 "data_size": 7936 00:15:24.809 } 00:15:24.809 ] 00:15:24.809 } 00:15:24.809 } 00:15:24.809 }' 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:24.809 BaseBdev2' 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:24.809 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.810 [2024-11-02 23:55:18.867842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.810 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.069 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.069 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.069 "name": "Existed_Raid", 00:15:25.069 "uuid": "9ad9d63c-1774-4771-9874-a4b378363cc6", 00:15:25.069 "strip_size_kb": 0, 00:15:25.069 "state": "online", 00:15:25.069 "raid_level": "raid1", 00:15:25.069 "superblock": true, 00:15:25.069 "num_base_bdevs": 2, 00:15:25.069 "num_base_bdevs_discovered": 1, 00:15:25.069 "num_base_bdevs_operational": 1, 00:15:25.069 "base_bdevs_list": [ 00:15:25.069 { 00:15:25.069 "name": null, 00:15:25.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.069 "is_configured": false, 00:15:25.069 "data_offset": 0, 00:15:25.069 "data_size": 7936 00:15:25.069 }, 00:15:25.069 { 00:15:25.069 "name": "BaseBdev2", 00:15:25.069 "uuid": 
"685377d6-baea-460e-8708-25affa372a3b", 00:15:25.069 "is_configured": true, 00:15:25.069 "data_offset": 256, 00:15:25.069 "data_size": 7936 00:15:25.069 } 00:15:25.069 ] 00:15:25.069 }' 00:15:25.069 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.069 23:55:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.329 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:25.329 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:25.329 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:25.329 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.329 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.329 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.329 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.329 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:25.329 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:25.330 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:25.330 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.330 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.330 [2024-11-02 23:55:19.375166] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:25.330 [2024-11-02 23:55:19.375277] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:25.330 [2024-11-02 23:55:19.387733] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.330 [2024-11-02 23:55:19.387790] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:25.330 [2024-11-02 23:55:19.387801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:15:25.330 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.330 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:25.330 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:25.330 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.330 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:25.330 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.330 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.330 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.590 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:25.590 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:25.590 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:25.590 23:55:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97342 00:15:25.590 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 97342 ']' 00:15:25.590 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 97342 00:15:25.590 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:15:25.590 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:25.590 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 97342 00:15:25.590 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:25.590 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:25.590 killing process with pid 97342 00:15:25.590 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 97342' 00:15:25.590 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 97342 00:15:25.590 [2024-11-02 23:55:19.488869] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:25.590 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 97342 00:15:25.590 [2024-11-02 23:55:19.489829] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:25.850 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:15:25.850 00:15:25.850 real 0m3.982s 00:15:25.850 user 0m6.316s 00:15:25.850 sys 0m0.833s 00:15:25.850 23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:25.850 
23:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.850 ************************************ 00:15:25.850 END TEST raid_state_function_test_sb_md_separate 00:15:25.850 ************************************ 00:15:25.850 23:55:19 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:15:25.850 23:55:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:25.850 23:55:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:25.850 23:55:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:25.850 ************************************ 00:15:25.850 START TEST raid_superblock_test_md_separate 00:15:25.850 ************************************ 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=97583 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 97583 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 97583 ']' 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:25.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:25.850 23:55:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.850 [2024-11-02 23:55:19.876137] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:15:25.850 [2024-11-02 23:55:19.876370] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97583 ] 00:15:26.110 [2024-11-02 23:55:20.032356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.110 [2024-11-02 23:55:20.057652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.110 [2024-11-02 23:55:20.099890] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.110 [2024-11-02 23:55:20.099935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:26.680 23:55:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.680 malloc1 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.680 [2024-11-02 23:55:20.718029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:26.680 [2024-11-02 23:55:20.718185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.680 [2024-11-02 23:55:20.718226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:26.680 [2024-11-02 23:55:20.718282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.680 [2024-11-02 23:55:20.720140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.680 [2024-11-02 23:55:20.720216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:15:26.680 pt1 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:26.680 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.681 malloc2 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.681 23:55:20 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.681 [2024-11-02 23:55:20.751113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:26.681 [2024-11-02 23:55:20.751223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.681 [2024-11-02 23:55:20.751255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:26.681 [2024-11-02 23:55:20.751283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.681 [2024-11-02 23:55:20.753164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.681 [2024-11-02 23:55:20.753258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:26.681 pt2 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.681 [2024-11-02 23:55:20.763124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:26.681 [2024-11-02 23:55:20.764995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:26.681 [2024-11-02 23:55:20.765151] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:26.681 [2024-11-02 23:55:20.765168] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:26.681 [2024-11-02 23:55:20.765249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:26.681 [2024-11-02 23:55:20.765355] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:26.681 [2024-11-02 23:55:20.765371] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:26.681 [2024-11-02 23:55:20.765455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.681 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.940 23:55:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.940 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.940 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.940 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.940 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.940 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.940 "name": "raid_bdev1", 00:15:26.940 "uuid": "ea347ed2-ec6b-40e1-8a23-0e5bc055887c", 00:15:26.941 "strip_size_kb": 0, 00:15:26.941 "state": "online", 00:15:26.941 "raid_level": "raid1", 00:15:26.941 "superblock": true, 00:15:26.941 "num_base_bdevs": 2, 00:15:26.941 "num_base_bdevs_discovered": 2, 00:15:26.941 "num_base_bdevs_operational": 2, 00:15:26.941 "base_bdevs_list": [ 00:15:26.941 { 00:15:26.941 "name": "pt1", 00:15:26.941 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:26.941 "is_configured": true, 00:15:26.941 "data_offset": 256, 00:15:26.941 "data_size": 7936 00:15:26.941 }, 00:15:26.941 { 00:15:26.941 "name": "pt2", 00:15:26.941 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:26.941 "is_configured": true, 00:15:26.941 "data_offset": 256, 00:15:26.941 "data_size": 7936 00:15:26.941 } 00:15:26.941 ] 00:15:26.941 }' 00:15:26.941 23:55:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.941 23:55:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.200 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:27.200 23:55:21 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:27.200 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:27.201 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:27.201 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:27.201 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:27.201 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:27.201 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:27.201 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.201 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.201 [2024-11-02 23:55:21.226657] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.201 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.201 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:27.201 "name": "raid_bdev1", 00:15:27.201 "aliases": [ 00:15:27.201 "ea347ed2-ec6b-40e1-8a23-0e5bc055887c" 00:15:27.201 ], 00:15:27.201 "product_name": "Raid Volume", 00:15:27.201 "block_size": 4096, 00:15:27.201 "num_blocks": 7936, 00:15:27.201 "uuid": "ea347ed2-ec6b-40e1-8a23-0e5bc055887c", 00:15:27.201 "md_size": 32, 00:15:27.201 "md_interleave": false, 00:15:27.201 "dif_type": 0, 00:15:27.201 "assigned_rate_limits": { 00:15:27.201 "rw_ios_per_sec": 0, 00:15:27.201 "rw_mbytes_per_sec": 0, 00:15:27.201 "r_mbytes_per_sec": 0, 00:15:27.201 "w_mbytes_per_sec": 0 00:15:27.201 }, 00:15:27.201 "claimed": false, 00:15:27.201 "zoned": false, 
00:15:27.201 "supported_io_types": { 00:15:27.201 "read": true, 00:15:27.201 "write": true, 00:15:27.201 "unmap": false, 00:15:27.201 "flush": false, 00:15:27.201 "reset": true, 00:15:27.201 "nvme_admin": false, 00:15:27.201 "nvme_io": false, 00:15:27.201 "nvme_io_md": false, 00:15:27.201 "write_zeroes": true, 00:15:27.201 "zcopy": false, 00:15:27.201 "get_zone_info": false, 00:15:27.201 "zone_management": false, 00:15:27.201 "zone_append": false, 00:15:27.201 "compare": false, 00:15:27.201 "compare_and_write": false, 00:15:27.201 "abort": false, 00:15:27.201 "seek_hole": false, 00:15:27.201 "seek_data": false, 00:15:27.201 "copy": false, 00:15:27.201 "nvme_iov_md": false 00:15:27.201 }, 00:15:27.201 "memory_domains": [ 00:15:27.201 { 00:15:27.201 "dma_device_id": "system", 00:15:27.201 "dma_device_type": 1 00:15:27.201 }, 00:15:27.201 { 00:15:27.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.201 "dma_device_type": 2 00:15:27.201 }, 00:15:27.201 { 00:15:27.201 "dma_device_id": "system", 00:15:27.201 "dma_device_type": 1 00:15:27.201 }, 00:15:27.201 { 00:15:27.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.201 "dma_device_type": 2 00:15:27.201 } 00:15:27.201 ], 00:15:27.201 "driver_specific": { 00:15:27.201 "raid": { 00:15:27.201 "uuid": "ea347ed2-ec6b-40e1-8a23-0e5bc055887c", 00:15:27.201 "strip_size_kb": 0, 00:15:27.201 "state": "online", 00:15:27.201 "raid_level": "raid1", 00:15:27.201 "superblock": true, 00:15:27.201 "num_base_bdevs": 2, 00:15:27.201 "num_base_bdevs_discovered": 2, 00:15:27.201 "num_base_bdevs_operational": 2, 00:15:27.201 "base_bdevs_list": [ 00:15:27.201 { 00:15:27.201 "name": "pt1", 00:15:27.201 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:27.201 "is_configured": true, 00:15:27.201 "data_offset": 256, 00:15:27.201 "data_size": 7936 00:15:27.201 }, 00:15:27.201 { 00:15:27.201 "name": "pt2", 00:15:27.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.201 "is_configured": true, 00:15:27.201 "data_offset": 256, 
00:15:27.201 "data_size": 7936 00:15:27.201 } 00:15:27.201 ] 00:15:27.201 } 00:15:27.201 } 00:15:27.201 }' 00:15:27.201 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:27.461 pt2' 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.461 [2024-11-02 23:55:21.454163] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ea347ed2-ec6b-40e1-8a23-0e5bc055887c 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z ea347ed2-ec6b-40e1-8a23-0e5bc055887c ']' 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:27.461 23:55:21 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.461 [2024-11-02 23:55:21.481897] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.461 [2024-11-02 23:55:21.481928] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.461 [2024-11-02 23:55:21.482012] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.461 [2024-11-02 23:55:21.482069] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.461 [2024-11-02 23:55:21.482078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.461 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.723 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.723 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:27.723 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.723 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:27.723 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.723 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.723 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:27.723 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:27.723 23:55:21 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@650 -- # local es=0 00:15:27.723 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:27.723 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:27.723 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:27.723 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:27.723 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:27.723 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:27.723 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.723 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.723 [2024-11-02 23:55:21.621665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:27.723 [2024-11-02 23:55:21.623574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:27.723 [2024-11-02 23:55:21.623639] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:27.723 [2024-11-02 23:55:21.623681] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:27.723 [2024-11-02 23:55:21.623697] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.724 [2024-11-02 23:55:21.623706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state 
configuring 00:15:27.724 request: 00:15:27.724 { 00:15:27.724 "name": "raid_bdev1", 00:15:27.724 "raid_level": "raid1", 00:15:27.724 "base_bdevs": [ 00:15:27.724 "malloc1", 00:15:27.724 "malloc2" 00:15:27.724 ], 00:15:27.724 "superblock": false, 00:15:27.724 "method": "bdev_raid_create", 00:15:27.724 "req_id": 1 00:15:27.724 } 00:15:27.724 Got JSON-RPC error response 00:15:27.724 response: 00:15:27.724 { 00:15:27.724 "code": -17, 00:15:27.724 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:27.724 } 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.724 [2024-11-02 23:55:21.685517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:27.724 [2024-11-02 23:55:21.685614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.724 [2024-11-02 23:55:21.685667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:27.724 [2024-11-02 23:55:21.685695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.724 [2024-11-02 23:55:21.687604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.724 [2024-11-02 23:55:21.687670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:27.724 [2024-11-02 23:55:21.687733] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:27.724 [2024-11-02 23:55:21.687804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:27.724 pt1 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.724 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.725 23:55:21 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.725 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.725 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.725 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.725 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.725 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.725 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.725 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.725 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.725 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.725 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.725 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.725 "name": "raid_bdev1", 00:15:27.725 "uuid": "ea347ed2-ec6b-40e1-8a23-0e5bc055887c", 00:15:27.725 "strip_size_kb": 0, 00:15:27.725 "state": "configuring", 00:15:27.725 "raid_level": "raid1", 00:15:27.725 "superblock": true, 00:15:27.725 "num_base_bdevs": 2, 00:15:27.725 "num_base_bdevs_discovered": 1, 00:15:27.725 "num_base_bdevs_operational": 2, 00:15:27.725 "base_bdevs_list": [ 00:15:27.725 { 00:15:27.725 "name": "pt1", 00:15:27.725 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:27.725 "is_configured": true, 00:15:27.725 "data_offset": 256, 00:15:27.725 "data_size": 7936 00:15:27.725 }, 00:15:27.725 { 
00:15:27.725 "name": null, 00:15:27.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.725 "is_configured": false, 00:15:27.725 "data_offset": 256, 00:15:27.725 "data_size": 7936 00:15:27.725 } 00:15:27.725 ] 00:15:27.725 }' 00:15:27.725 23:55:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.725 23:55:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.296 [2024-11-02 23:55:22.168690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:28.296 [2024-11-02 23:55:22.168826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.296 [2024-11-02 23:55:22.168849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:28.296 [2024-11-02 23:55:22.168858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.296 [2024-11-02 23:55:22.169023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.296 [2024-11-02 23:55:22.169037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:28.296 [2024-11-02 23:55:22.169080] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:28.296 [2024-11-02 23:55:22.169098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:28.296 [2024-11-02 23:55:22.169184] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:28.296 [2024-11-02 23:55:22.169193] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:28.296 [2024-11-02 23:55:22.169267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:28.296 [2024-11-02 23:55:22.169348] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:28.296 [2024-11-02 23:55:22.169361] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:15:28.296 [2024-11-02 23:55:22.169423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.296 pt2 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.296 23:55:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.296 "name": "raid_bdev1", 00:15:28.296 "uuid": "ea347ed2-ec6b-40e1-8a23-0e5bc055887c", 00:15:28.296 "strip_size_kb": 0, 00:15:28.296 "state": "online", 00:15:28.296 "raid_level": "raid1", 00:15:28.296 "superblock": true, 00:15:28.296 "num_base_bdevs": 2, 00:15:28.296 "num_base_bdevs_discovered": 2, 00:15:28.296 "num_base_bdevs_operational": 2, 00:15:28.296 "base_bdevs_list": [ 00:15:28.296 { 00:15:28.296 "name": "pt1", 00:15:28.296 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:28.296 "is_configured": true, 00:15:28.296 "data_offset": 256, 00:15:28.296 "data_size": 7936 00:15:28.296 }, 00:15:28.296 { 00:15:28.296 "name": "pt2", 00:15:28.296 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:15:28.296 "is_configured": true, 00:15:28.296 "data_offset": 256, 00:15:28.296 "data_size": 7936 00:15:28.296 } 00:15:28.296 ] 00:15:28.296 }' 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.296 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.555 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:28.555 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:28.555 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:28.555 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:28.555 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:28.555 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:28.555 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:28.555 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:28.555 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.555 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.555 [2024-11-02 23:55:22.604234] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.555 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.555 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:28.555 "name": "raid_bdev1", 00:15:28.555 
"aliases": [ 00:15:28.555 "ea347ed2-ec6b-40e1-8a23-0e5bc055887c" 00:15:28.555 ], 00:15:28.555 "product_name": "Raid Volume", 00:15:28.555 "block_size": 4096, 00:15:28.555 "num_blocks": 7936, 00:15:28.555 "uuid": "ea347ed2-ec6b-40e1-8a23-0e5bc055887c", 00:15:28.555 "md_size": 32, 00:15:28.555 "md_interleave": false, 00:15:28.555 "dif_type": 0, 00:15:28.555 "assigned_rate_limits": { 00:15:28.555 "rw_ios_per_sec": 0, 00:15:28.555 "rw_mbytes_per_sec": 0, 00:15:28.555 "r_mbytes_per_sec": 0, 00:15:28.555 "w_mbytes_per_sec": 0 00:15:28.555 }, 00:15:28.555 "claimed": false, 00:15:28.555 "zoned": false, 00:15:28.555 "supported_io_types": { 00:15:28.555 "read": true, 00:15:28.555 "write": true, 00:15:28.555 "unmap": false, 00:15:28.555 "flush": false, 00:15:28.555 "reset": true, 00:15:28.555 "nvme_admin": false, 00:15:28.555 "nvme_io": false, 00:15:28.555 "nvme_io_md": false, 00:15:28.555 "write_zeroes": true, 00:15:28.555 "zcopy": false, 00:15:28.555 "get_zone_info": false, 00:15:28.555 "zone_management": false, 00:15:28.555 "zone_append": false, 00:15:28.555 "compare": false, 00:15:28.555 "compare_and_write": false, 00:15:28.555 "abort": false, 00:15:28.555 "seek_hole": false, 00:15:28.555 "seek_data": false, 00:15:28.555 "copy": false, 00:15:28.555 "nvme_iov_md": false 00:15:28.555 }, 00:15:28.555 "memory_domains": [ 00:15:28.555 { 00:15:28.555 "dma_device_id": "system", 00:15:28.555 "dma_device_type": 1 00:15:28.555 }, 00:15:28.555 { 00:15:28.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.555 "dma_device_type": 2 00:15:28.555 }, 00:15:28.555 { 00:15:28.555 "dma_device_id": "system", 00:15:28.555 "dma_device_type": 1 00:15:28.555 }, 00:15:28.555 { 00:15:28.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.555 "dma_device_type": 2 00:15:28.555 } 00:15:28.555 ], 00:15:28.555 "driver_specific": { 00:15:28.555 "raid": { 00:15:28.555 "uuid": "ea347ed2-ec6b-40e1-8a23-0e5bc055887c", 00:15:28.555 "strip_size_kb": 0, 00:15:28.555 "state": "online", 00:15:28.555 
"raid_level": "raid1", 00:15:28.555 "superblock": true, 00:15:28.555 "num_base_bdevs": 2, 00:15:28.555 "num_base_bdevs_discovered": 2, 00:15:28.555 "num_base_bdevs_operational": 2, 00:15:28.555 "base_bdevs_list": [ 00:15:28.555 { 00:15:28.555 "name": "pt1", 00:15:28.555 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:28.555 "is_configured": true, 00:15:28.555 "data_offset": 256, 00:15:28.555 "data_size": 7936 00:15:28.555 }, 00:15:28.555 { 00:15:28.555 "name": "pt2", 00:15:28.555 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.555 "is_configured": true, 00:15:28.555 "data_offset": 256, 00:15:28.555 "data_size": 7936 00:15:28.555 } 00:15:28.555 ] 00:15:28.555 } 00:15:28.555 } 00:15:28.555 }' 00:15:28.555 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:28.814 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:28.814 pt2' 00:15:28.814 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.814 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.815 23:55:22 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:28.815 [2024-11-02 23:55:22.831845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' ea347ed2-ec6b-40e1-8a23-0e5bc055887c '!=' ea347ed2-ec6b-40e1-8a23-0e5bc055887c ']' 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.815 [2024-11-02 23:55:22.879528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:28.815 
23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.815 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.075 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.075 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.075 "name": "raid_bdev1", 00:15:29.075 "uuid": "ea347ed2-ec6b-40e1-8a23-0e5bc055887c", 00:15:29.075 "strip_size_kb": 0, 00:15:29.075 "state": "online", 00:15:29.075 "raid_level": "raid1", 00:15:29.075 "superblock": true, 00:15:29.075 "num_base_bdevs": 2, 00:15:29.075 "num_base_bdevs_discovered": 1, 00:15:29.075 "num_base_bdevs_operational": 1, 00:15:29.075 "base_bdevs_list": [ 00:15:29.075 { 00:15:29.075 "name": null, 00:15:29.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.075 "is_configured": false, 00:15:29.075 "data_offset": 0, 00:15:29.075 "data_size": 7936 00:15:29.075 }, 00:15:29.075 { 00:15:29.075 "name": "pt2", 00:15:29.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.075 "is_configured": true, 00:15:29.075 "data_offset": 256, 00:15:29.075 "data_size": 7936 00:15:29.075 } 
00:15:29.075 ] 00:15:29.075 }' 00:15:29.075 23:55:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.075 23:55:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.335 [2024-11-02 23:55:23.306809] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:29.335 [2024-11-02 23:55:23.306835] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.335 [2024-11-02 23:55:23.306894] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.335 [2024-11-02 23:55:23.306940] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.335 [2024-11-02 23:55:23.306949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.335 23:55:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.335 [2024-11-02 23:55:23.382673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:29.335 [2024-11-02 
23:55:23.382727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.335 [2024-11-02 23:55:23.382760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:29.335 [2024-11-02 23:55:23.382770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.335 [2024-11-02 23:55:23.384679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.335 [2024-11-02 23:55:23.384755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:29.335 [2024-11-02 23:55:23.384828] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:29.335 [2024-11-02 23:55:23.384866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:29.335 [2024-11-02 23:55:23.384944] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:29.335 [2024-11-02 23:55:23.384952] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:29.335 [2024-11-02 23:55:23.385015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:29.335 [2024-11-02 23:55:23.385098] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:29.335 [2024-11-02 23:55:23.385109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:15:29.335 [2024-11-02 23:55:23.385168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.335 pt2 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.335 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.595 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.595 "name": "raid_bdev1", 00:15:29.595 "uuid": "ea347ed2-ec6b-40e1-8a23-0e5bc055887c", 00:15:29.595 "strip_size_kb": 0, 00:15:29.595 "state": "online", 00:15:29.595 "raid_level": "raid1", 00:15:29.595 "superblock": true, 00:15:29.595 "num_base_bdevs": 2, 00:15:29.595 
"num_base_bdevs_discovered": 1, 00:15:29.595 "num_base_bdevs_operational": 1, 00:15:29.595 "base_bdevs_list": [ 00:15:29.595 { 00:15:29.595 "name": null, 00:15:29.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.595 "is_configured": false, 00:15:29.595 "data_offset": 256, 00:15:29.595 "data_size": 7936 00:15:29.595 }, 00:15:29.595 { 00:15:29.595 "name": "pt2", 00:15:29.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.595 "is_configured": true, 00:15:29.595 "data_offset": 256, 00:15:29.595 "data_size": 7936 00:15:29.595 } 00:15:29.595 ] 00:15:29.595 }' 00:15:29.595 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.595 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.855 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:29.855 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.855 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.855 [2024-11-02 23:55:23.817929] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:29.855 [2024-11-02 23:55:23.818006] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.855 [2024-11-02 23:55:23.818129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.855 [2024-11-02 23:55:23.818208] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.855 [2024-11-02 23:55:23.818275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:15:29.855 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.855 23:55:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.855 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.855 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.855 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:29.855 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.855 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:29.855 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:29.855 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:29.855 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:29.855 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.855 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.855 [2024-11-02 23:55:23.881880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:29.855 [2024-11-02 23:55:23.881974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.855 [2024-11-02 23:55:23.882008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:15:29.855 [2024-11-02 23:55:23.882039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.855 [2024-11-02 23:55:23.884017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.855 [2024-11-02 23:55:23.884100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:15:29.855 [2024-11-02 23:55:23.884171] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:29.855 [2024-11-02 23:55:23.884220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:29.855 [2024-11-02 23:55:23.884355] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:29.855 [2024-11-02 23:55:23.884415] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:29.855 [2024-11-02 23:55:23.884466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:15:29.855 [2024-11-02 23:55:23.884526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:29.856 [2024-11-02 23:55:23.884646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:15:29.856 [2024-11-02 23:55:23.884688] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:29.856 [2024-11-02 23:55:23.884770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:29.856 [2024-11-02 23:55:23.884875] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:15:29.856 [2024-11-02 23:55:23.884910] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:15:29.856 [2024-11-02 23:55:23.885020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.856 pt1 00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.856 "name": "raid_bdev1", 00:15:29.856 "uuid": "ea347ed2-ec6b-40e1-8a23-0e5bc055887c", 00:15:29.856 "strip_size_kb": 0, 00:15:29.856 "state": "online", 00:15:29.856 "raid_level": "raid1", 
00:15:29.856 "superblock": true, 00:15:29.856 "num_base_bdevs": 2, 00:15:29.856 "num_base_bdevs_discovered": 1, 00:15:29.856 "num_base_bdevs_operational": 1, 00:15:29.856 "base_bdevs_list": [ 00:15:29.856 { 00:15:29.856 "name": null, 00:15:29.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.856 "is_configured": false, 00:15:29.856 "data_offset": 256, 00:15:29.856 "data_size": 7936 00:15:29.856 }, 00:15:29.856 { 00:15:29.856 "name": "pt2", 00:15:29.856 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.856 "is_configured": true, 00:15:29.856 "data_offset": 256, 00:15:29.856 "data_size": 7936 00:15:29.856 } 00:15:29.856 ] 00:15:29.856 }' 00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.856 23:55:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.425 
23:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.425 [2024-11-02 23:55:24.373219] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' ea347ed2-ec6b-40e1-8a23-0e5bc055887c '!=' ea347ed2-ec6b-40e1-8a23-0e5bc055887c ']' 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 97583 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' -z 97583 ']' 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 97583 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 97583 00:15:30.425 killing process with pid 97583 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 97583' 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@971 -- # kill 97583 00:15:30.425 [2024-11-02 23:55:24.453111] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:30.425 [2024-11-02 23:55:24.453180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:15:30.425 [2024-11-02 23:55:24.453225] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:30.425 [2024-11-02 23:55:24.453233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:15:30.425 23:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # wait 97583 00:15:30.425 [2024-11-02 23:55:24.477627] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.685 23:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:15:30.685 00:15:30.685 real 0m4.906s 00:15:30.685 user 0m8.029s 00:15:30.685 sys 0m1.101s 00:15:30.685 23:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:30.685 23:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.685 ************************************ 00:15:30.685 END TEST raid_superblock_test_md_separate 00:15:30.685 ************************************ 00:15:30.685 23:55:24 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:15:30.685 23:55:24 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:15:30.685 23:55:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:30.685 23:55:24 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:30.685 23:55:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.685 ************************************ 00:15:30.685 START TEST raid_rebuild_test_sb_md_separate 00:15:30.685 ************************************ 00:15:30.685 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:15:30.685 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:15:30.685 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:30.685 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:30.685 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:30.685 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:30.945 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:30.945 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.945 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:30.945 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:30.945 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.945 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:30.945 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:30.945 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.945 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:30.945 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:30.945 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:30.945 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:30.945 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:15:30.945 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:30.946 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:30.946 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:30.946 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:30.946 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:30.946 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:30.946 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=97899 00:15:30.946 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:30.946 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 97899 00:15:30.946 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 97899 ']' 00:15:30.946 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.946 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:30.946 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:30.946 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:30.946 23:55:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.946 [2024-11-02 23:55:24.873957] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:15:30.946 [2024-11-02 23:55:24.874192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:30.946 Zero copy mechanism will not be used. 00:15:30.946 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97899 ] 00:15:30.946 [2024-11-02 23:55:25.030975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.205 [2024-11-02 23:55:25.057016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.205 [2024-11-02 23:55:25.099276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.205 [2024-11-02 23:55:25.099387] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.775 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:31.775 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:15:31.775 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:31.775 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.776 BaseBdev1_malloc 
00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.776 [2024-11-02 23:55:25.717748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:31.776 [2024-11-02 23:55:25.717811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.776 [2024-11-02 23:55:25.717850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:31.776 [2024-11-02 23:55:25.717869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.776 [2024-11-02 23:55:25.719776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.776 [2024-11-02 23:55:25.719808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:31.776 BaseBdev1 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.776 BaseBdev2_malloc 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.776 [2024-11-02 23:55:25.746840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:31.776 [2024-11-02 23:55:25.746897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.776 [2024-11-02 23:55:25.746933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:31.776 [2024-11-02 23:55:25.746941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.776 [2024-11-02 23:55:25.748781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.776 [2024-11-02 23:55:25.748877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:31.776 BaseBdev2 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.776 spare_malloc 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.776 spare_delay 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.776 [2024-11-02 23:55:25.800283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:31.776 [2024-11-02 23:55:25.800342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.776 [2024-11-02 23:55:25.800369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:31.776 [2024-11-02 23:55:25.800380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.776 [2024-11-02 23:55:25.802822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.776 [2024-11-02 23:55:25.802861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:31.776 spare 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:15:31.776 [2024-11-02 23:55:25.812273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:31.776 [2024-11-02 23:55:25.814078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.776 [2024-11-02 23:55:25.814227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:31.776 [2024-11-02 23:55:25.814240] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:31.776 [2024-11-02 23:55:25.814327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:31.776 [2024-11-02 23:55:25.814426] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:31.776 [2024-11-02 23:55:25.814452] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:31.776 [2024-11-02 23:55:25.814530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:31.776 23:55:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.776 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.036 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.036 "name": "raid_bdev1", 00:15:32.036 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:32.036 "strip_size_kb": 0, 00:15:32.036 "state": "online", 00:15:32.036 "raid_level": "raid1", 00:15:32.036 "superblock": true, 00:15:32.036 "num_base_bdevs": 2, 00:15:32.036 "num_base_bdevs_discovered": 2, 00:15:32.036 "num_base_bdevs_operational": 2, 00:15:32.036 "base_bdevs_list": [ 00:15:32.036 { 00:15:32.036 "name": "BaseBdev1", 00:15:32.036 "uuid": "7471fda8-1867-5cfe-8ef6-b1e2270c1c4a", 00:15:32.036 "is_configured": true, 00:15:32.036 "data_offset": 256, 00:15:32.036 "data_size": 7936 00:15:32.036 }, 00:15:32.036 { 00:15:32.036 "name": "BaseBdev2", 00:15:32.036 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:32.036 "is_configured": true, 00:15:32.036 "data_offset": 256, 00:15:32.036 "data_size": 7936 
00:15:32.036 } 00:15:32.036 ] 00:15:32.036 }' 00:15:32.036 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.036 23:55:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.299 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:32.299 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:32.299 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.299 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.299 [2024-11-02 23:55:26.311663] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.299 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.299 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:32.299 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.299 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:32.299 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.299 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.299 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:32.562 [2024-11-02 23:55:26.571018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:32.562 /dev/nbd0 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@871 -- # local i 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.562 1+0 records in 00:15:32.562 1+0 records out 00:15:32.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602517 s, 6.8 MB/s 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.562 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:32.820 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:15:32.820 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.820 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:32.820 23:55:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:32.820 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:32.820 23:55:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:33.386 7936+0 records in 00:15:33.386 7936+0 records out 00:15:33.386 32505856 bytes (33 MB, 31 MiB) copied, 0.595384 s, 54.6 MB/s 00:15:33.386 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:33.386 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:33.386 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:33.386 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:33.386 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:15:33.386 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:33.386 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:33.386 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:33.386 [2024-11-02 23:55:27.463757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.386 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:33.386 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:33.386 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:33.386 23:55:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:33.386 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:33.386 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:33.386 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:33.386 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:33.386 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.386 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.386 [2024-11-02 23:55:27.477390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:33.653 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.653 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:33.653 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.653 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.653 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.653 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.653 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:33.653 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.653 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:33.653 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.653 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.653 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.653 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.653 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.653 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.653 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.653 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.653 "name": "raid_bdev1", 00:15:33.653 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:33.653 "strip_size_kb": 0, 00:15:33.653 "state": "online", 00:15:33.653 "raid_level": "raid1", 00:15:33.653 "superblock": true, 00:15:33.653 "num_base_bdevs": 2, 00:15:33.653 "num_base_bdevs_discovered": 1, 00:15:33.653 "num_base_bdevs_operational": 1, 00:15:33.653 "base_bdevs_list": [ 00:15:33.653 { 00:15:33.653 "name": null, 00:15:33.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.653 "is_configured": false, 00:15:33.653 "data_offset": 0, 00:15:33.653 "data_size": 7936 00:15:33.653 }, 00:15:33.653 { 00:15:33.653 "name": "BaseBdev2", 00:15:33.653 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:33.653 "is_configured": true, 00:15:33.653 "data_offset": 256, 00:15:33.653 "data_size": 7936 00:15:33.653 } 00:15:33.653 ] 00:15:33.653 }' 00:15:33.653 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.653 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:15:33.912 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:33.912 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.912 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.912 [2024-11-02 23:55:27.940607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:33.912 [2024-11-02 23:55:27.943233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:15:33.912 [2024-11-02 23:55:27.945048] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:33.912 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.912 23:55:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:35.291 23:55:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.291 23:55:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.291 23:55:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.291 23:55:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.291 23:55:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.291 23:55:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.291 23:55:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.291 23:55:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.291 23:55:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.291 23:55:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.291 23:55:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.291 "name": "raid_bdev1", 00:15:35.291 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:35.291 "strip_size_kb": 0, 00:15:35.291 "state": "online", 00:15:35.291 "raid_level": "raid1", 00:15:35.291 "superblock": true, 00:15:35.291 "num_base_bdevs": 2, 00:15:35.291 "num_base_bdevs_discovered": 2, 00:15:35.291 "num_base_bdevs_operational": 2, 00:15:35.291 "process": { 00:15:35.291 "type": "rebuild", 00:15:35.291 "target": "spare", 00:15:35.291 "progress": { 00:15:35.291 "blocks": 2560, 00:15:35.291 "percent": 32 00:15:35.291 } 00:15:35.291 }, 00:15:35.291 "base_bdevs_list": [ 00:15:35.291 { 00:15:35.291 "name": "spare", 00:15:35.291 "uuid": "c2d6dec2-3694-52b0-934b-958932b48643", 00:15:35.291 "is_configured": true, 00:15:35.291 "data_offset": 256, 00:15:35.291 "data_size": 7936 00:15:35.291 }, 00:15:35.291 { 00:15:35.291 "name": "BaseBdev2", 00:15:35.291 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:35.291 "is_configured": true, 00:15:35.291 "data_offset": 256, 00:15:35.291 "data_size": 7936 00:15:35.291 } 00:15:35.291 ] 00:15:35.291 }' 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.291 23:55:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.291 [2024-11-02 23:55:29.107853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.291 [2024-11-02 23:55:29.149637] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:35.291 [2024-11-02 23:55:29.149704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.291 [2024-11-02 23:55:29.149723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.291 [2024-11-02 23:55:29.149730] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.291 23:55:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.291 "name": "raid_bdev1", 00:15:35.291 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:35.291 "strip_size_kb": 0, 00:15:35.291 "state": "online", 00:15:35.291 "raid_level": "raid1", 00:15:35.291 "superblock": true, 00:15:35.291 "num_base_bdevs": 2, 00:15:35.291 "num_base_bdevs_discovered": 1, 00:15:35.291 "num_base_bdevs_operational": 1, 00:15:35.291 "base_bdevs_list": [ 00:15:35.291 { 00:15:35.291 "name": null, 00:15:35.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.291 "is_configured": false, 00:15:35.291 "data_offset": 0, 00:15:35.291 "data_size": 7936 00:15:35.291 }, 00:15:35.291 { 00:15:35.291 "name": "BaseBdev2", 00:15:35.291 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:35.291 "is_configured": true, 00:15:35.291 "data_offset": 256, 00:15:35.291 "data_size": 7936 00:15:35.291 } 00:15:35.291 ] 00:15:35.291 }' 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.291 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.551 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:35.551 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.551 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:35.551 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:35.551 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.551 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.551 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.551 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.551 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.551 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.810 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.810 "name": "raid_bdev1", 00:15:35.810 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:35.810 "strip_size_kb": 0, 00:15:35.810 "state": "online", 00:15:35.810 "raid_level": "raid1", 00:15:35.810 "superblock": true, 00:15:35.810 "num_base_bdevs": 2, 00:15:35.810 "num_base_bdevs_discovered": 1, 00:15:35.810 "num_base_bdevs_operational": 1, 00:15:35.810 "base_bdevs_list": [ 00:15:35.810 { 00:15:35.810 "name": null, 00:15:35.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.810 
"is_configured": false, 00:15:35.810 "data_offset": 0, 00:15:35.810 "data_size": 7936 00:15:35.810 }, 00:15:35.810 { 00:15:35.810 "name": "BaseBdev2", 00:15:35.810 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:35.810 "is_configured": true, 00:15:35.810 "data_offset": 256, 00:15:35.810 "data_size": 7936 00:15:35.810 } 00:15:35.810 ] 00:15:35.810 }' 00:15:35.810 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.810 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:35.810 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.810 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:35.810 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:35.810 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.810 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.811 [2024-11-02 23:55:29.763771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:35.811 [2024-11-02 23:55:29.766254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:15:35.811 [2024-11-02 23:55:29.768196] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:35.811 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.811 23:55:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:36.753 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.753 23:55:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.753 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.753 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.753 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.753 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.753 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.753 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.753 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.753 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.753 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.753 "name": "raid_bdev1", 00:15:36.753 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:36.753 "strip_size_kb": 0, 00:15:36.753 "state": "online", 00:15:36.753 "raid_level": "raid1", 00:15:36.753 "superblock": true, 00:15:36.753 "num_base_bdevs": 2, 00:15:36.753 "num_base_bdevs_discovered": 2, 00:15:36.753 "num_base_bdevs_operational": 2, 00:15:36.753 "process": { 00:15:36.753 "type": "rebuild", 00:15:36.753 "target": "spare", 00:15:36.753 "progress": { 00:15:36.753 "blocks": 2560, 00:15:36.753 "percent": 32 00:15:36.753 } 00:15:36.753 }, 00:15:36.753 "base_bdevs_list": [ 00:15:36.753 { 00:15:36.753 "name": "spare", 00:15:36.753 "uuid": "c2d6dec2-3694-52b0-934b-958932b48643", 00:15:36.753 "is_configured": true, 00:15:36.753 "data_offset": 256, 00:15:36.753 "data_size": 7936 00:15:36.753 }, 
00:15:36.753 { 00:15:36.753 "name": "BaseBdev2", 00:15:36.753 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:36.753 "is_configured": true, 00:15:36.753 "data_offset": 256, 00:15:36.753 "data_size": 7936 00:15:36.753 } 00:15:36.753 ] 00:15:36.753 }' 00:15:36.753 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:37.013 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=589 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.013 23:55:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.013 "name": "raid_bdev1", 00:15:37.013 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:37.013 "strip_size_kb": 0, 00:15:37.013 "state": "online", 00:15:37.013 "raid_level": "raid1", 00:15:37.013 "superblock": true, 00:15:37.013 "num_base_bdevs": 2, 00:15:37.013 "num_base_bdevs_discovered": 2, 00:15:37.013 "num_base_bdevs_operational": 2, 00:15:37.013 "process": { 00:15:37.013 "type": "rebuild", 00:15:37.013 "target": "spare", 00:15:37.013 "progress": { 00:15:37.013 "blocks": 2816, 00:15:37.013 "percent": 35 00:15:37.013 } 00:15:37.013 }, 00:15:37.013 "base_bdevs_list": [ 00:15:37.013 { 00:15:37.013 "name": "spare", 00:15:37.013 "uuid": "c2d6dec2-3694-52b0-934b-958932b48643", 00:15:37.013 "is_configured": true, 00:15:37.013 "data_offset": 256, 00:15:37.013 "data_size": 7936 00:15:37.013 }, 00:15:37.013 { 00:15:37.013 "name": "BaseBdev2", 00:15:37.013 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:37.013 
"is_configured": true, 00:15:37.013 "data_offset": 256, 00:15:37.013 "data_size": 7936 00:15:37.013 } 00:15:37.013 ] 00:15:37.013 }' 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.013 23:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.013 23:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.013 23:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:37.972 23:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.972 23:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.972 23:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.972 23:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.972 23:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.972 23:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.972 23:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.972 23:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.972 23:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.972 23:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.232 23:55:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.232 23:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.232 "name": "raid_bdev1", 00:15:38.232 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:38.232 "strip_size_kb": 0, 00:15:38.232 "state": "online", 00:15:38.232 "raid_level": "raid1", 00:15:38.232 "superblock": true, 00:15:38.232 "num_base_bdevs": 2, 00:15:38.232 "num_base_bdevs_discovered": 2, 00:15:38.232 "num_base_bdevs_operational": 2, 00:15:38.232 "process": { 00:15:38.232 "type": "rebuild", 00:15:38.232 "target": "spare", 00:15:38.232 "progress": { 00:15:38.232 "blocks": 5632, 00:15:38.232 "percent": 70 00:15:38.232 } 00:15:38.232 }, 00:15:38.232 "base_bdevs_list": [ 00:15:38.232 { 00:15:38.232 "name": "spare", 00:15:38.232 "uuid": "c2d6dec2-3694-52b0-934b-958932b48643", 00:15:38.232 "is_configured": true, 00:15:38.232 "data_offset": 256, 00:15:38.232 "data_size": 7936 00:15:38.232 }, 00:15:38.232 { 00:15:38.232 "name": "BaseBdev2", 00:15:38.232 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:38.232 "is_configured": true, 00:15:38.232 "data_offset": 256, 00:15:38.232 "data_size": 7936 00:15:38.232 } 00:15:38.232 ] 00:15:38.232 }' 00:15:38.232 23:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.232 23:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.232 23:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.232 23:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.232 23:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:38.811 [2024-11-02 23:55:32.878956] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:15:38.811 [2024-11-02 23:55:32.879086] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:38.811 [2024-11-02 23:55:32.879208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.392 "name": "raid_bdev1", 00:15:39.392 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:39.392 "strip_size_kb": 0, 00:15:39.392 "state": "online", 00:15:39.392 "raid_level": "raid1", 00:15:39.392 "superblock": true, 00:15:39.392 
"num_base_bdevs": 2, 00:15:39.392 "num_base_bdevs_discovered": 2, 00:15:39.392 "num_base_bdevs_operational": 2, 00:15:39.392 "base_bdevs_list": [ 00:15:39.392 { 00:15:39.392 "name": "spare", 00:15:39.392 "uuid": "c2d6dec2-3694-52b0-934b-958932b48643", 00:15:39.392 "is_configured": true, 00:15:39.392 "data_offset": 256, 00:15:39.392 "data_size": 7936 00:15:39.392 }, 00:15:39.392 { 00:15:39.392 "name": "BaseBdev2", 00:15:39.392 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:39.392 "is_configured": true, 00:15:39.392 "data_offset": 256, 00:15:39.392 "data_size": 7936 00:15:39.392 } 00:15:39.392 ] 00:15:39.392 }' 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.392 23:55:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.392 "name": "raid_bdev1", 00:15:39.392 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:39.392 "strip_size_kb": 0, 00:15:39.392 "state": "online", 00:15:39.392 "raid_level": "raid1", 00:15:39.392 "superblock": true, 00:15:39.392 "num_base_bdevs": 2, 00:15:39.392 "num_base_bdevs_discovered": 2, 00:15:39.392 "num_base_bdevs_operational": 2, 00:15:39.392 "base_bdevs_list": [ 00:15:39.392 { 00:15:39.392 "name": "spare", 00:15:39.392 "uuid": "c2d6dec2-3694-52b0-934b-958932b48643", 00:15:39.392 "is_configured": true, 00:15:39.392 "data_offset": 256, 00:15:39.392 "data_size": 7936 00:15:39.392 }, 00:15:39.392 { 00:15:39.392 "name": "BaseBdev2", 00:15:39.392 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:39.392 "is_configured": true, 00:15:39.392 "data_offset": 256, 00:15:39.392 "data_size": 7936 00:15:39.392 } 00:15:39.392 ] 00:15:39.392 }' 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:39.392 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.652 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:39.652 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:39.652 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.652 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.652 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.652 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.652 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:39.652 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.652 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.652 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.652 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.652 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.652 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.652 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.652 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.652 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.652 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.652 "name": "raid_bdev1", 00:15:39.652 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:39.652 
"strip_size_kb": 0, 00:15:39.652 "state": "online", 00:15:39.652 "raid_level": "raid1", 00:15:39.652 "superblock": true, 00:15:39.652 "num_base_bdevs": 2, 00:15:39.652 "num_base_bdevs_discovered": 2, 00:15:39.652 "num_base_bdevs_operational": 2, 00:15:39.652 "base_bdevs_list": [ 00:15:39.652 { 00:15:39.652 "name": "spare", 00:15:39.652 "uuid": "c2d6dec2-3694-52b0-934b-958932b48643", 00:15:39.652 "is_configured": true, 00:15:39.652 "data_offset": 256, 00:15:39.652 "data_size": 7936 00:15:39.652 }, 00:15:39.652 { 00:15:39.652 "name": "BaseBdev2", 00:15:39.652 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:39.652 "is_configured": true, 00:15:39.652 "data_offset": 256, 00:15:39.652 "data_size": 7936 00:15:39.652 } 00:15:39.652 ] 00:15:39.652 }' 00:15:39.652 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.652 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.912 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:39.912 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.912 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.912 [2024-11-02 23:55:33.976381] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:39.912 [2024-11-02 23:55:33.976466] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.912 [2024-11-02 23:55:33.976583] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.912 [2024-11-02 23:55:33.976666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.912 [2024-11-02 23:55:33.976766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, 
state offline 00:15:39.912 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.912 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.912 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.912 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.912 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:15:39.912 23:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.172 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:40.172 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:40.172 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:40.172 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:40.172 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.172 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:40.172 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:40.172 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:40.172 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:40.172 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:15:40.172 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:40.172 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:40.172 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:40.172 /dev/nbd0 00:15:40.172 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.432 1+0 records in 00:15:40.432 1+0 records out 00:15:40.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513081 s, 8.0 MB/s 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:40.432 /dev/nbd1 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:40.432 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:40.692 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:15:40.692 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:40.692 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:40.692 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.692 1+0 records in 00:15:40.692 1+0 records out 00:15:40.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031077 s, 13.2 MB/s 00:15:40.692 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.692 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:15:40.692 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.692 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:40.692 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:15:40.692 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.692 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:40.692 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:40.692 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:40.692 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.692 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:40.692 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:15:40.692 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:15:40.692 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.692 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:40.951 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:40.951 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:40.951 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:40.951 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.951 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.951 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:40.951 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:40.951 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.951 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.951 23:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:40.951 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:40.951 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:40.951 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:15:40.951 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.951 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.951 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:40.951 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:40.951 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.951 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:40.951 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:40.951 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.951 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.951 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.951 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:40.951 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.951 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.951 [2024-11-02 23:55:35.040209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:40.951 [2024-11-02 23:55:35.040270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.951 [2024-11-02 23:55:35.040289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:40.951 [2024-11-02 23:55:35.040302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:40.951 [2024-11-02 23:55:35.042191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.951 [2024-11-02 23:55:35.042286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:40.951 [2024-11-02 23:55:35.042354] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:40.951 [2024-11-02 23:55:35.042393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:40.951 [2024-11-02 23:55:35.042506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.210 spare 00:15:41.210 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.210 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:41.210 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.210 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.210 [2024-11-02 23:55:35.142411] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:15:41.210 [2024-11-02 23:55:35.142435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:41.210 [2024-11-02 23:55:35.142530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:15:41.210 [2024-11-02 23:55:35.142628] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:15:41.210 [2024-11-02 23:55:35.142638] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:15:41.210 [2024-11-02 23:55:35.142735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.210 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:41.210 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:41.210 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.210 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.210 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.210 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.210 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:41.210 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.210 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.211 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.211 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.211 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.211 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.211 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.211 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.211 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.211 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.211 "name": "raid_bdev1", 00:15:41.211 "uuid": 
"272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:41.211 "strip_size_kb": 0, 00:15:41.211 "state": "online", 00:15:41.211 "raid_level": "raid1", 00:15:41.211 "superblock": true, 00:15:41.211 "num_base_bdevs": 2, 00:15:41.211 "num_base_bdevs_discovered": 2, 00:15:41.211 "num_base_bdevs_operational": 2, 00:15:41.211 "base_bdevs_list": [ 00:15:41.211 { 00:15:41.211 "name": "spare", 00:15:41.211 "uuid": "c2d6dec2-3694-52b0-934b-958932b48643", 00:15:41.211 "is_configured": true, 00:15:41.211 "data_offset": 256, 00:15:41.211 "data_size": 7936 00:15:41.211 }, 00:15:41.211 { 00:15:41.211 "name": "BaseBdev2", 00:15:41.211 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:41.211 "is_configured": true, 00:15:41.211 "data_offset": 256, 00:15:41.211 "data_size": 7936 00:15:41.211 } 00:15:41.211 ] 00:15:41.211 }' 00:15:41.211 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.211 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.780 "name": "raid_bdev1", 00:15:41.780 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:41.780 "strip_size_kb": 0, 00:15:41.780 "state": "online", 00:15:41.780 "raid_level": "raid1", 00:15:41.780 "superblock": true, 00:15:41.780 "num_base_bdevs": 2, 00:15:41.780 "num_base_bdevs_discovered": 2, 00:15:41.780 "num_base_bdevs_operational": 2, 00:15:41.780 "base_bdevs_list": [ 00:15:41.780 { 00:15:41.780 "name": "spare", 00:15:41.780 "uuid": "c2d6dec2-3694-52b0-934b-958932b48643", 00:15:41.780 "is_configured": true, 00:15:41.780 "data_offset": 256, 00:15:41.780 "data_size": 7936 00:15:41.780 }, 00:15:41.780 { 00:15:41.780 "name": "BaseBdev2", 00:15:41.780 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:41.780 "is_configured": true, 00:15:41.780 "data_offset": 256, 00:15:41.780 "data_size": 7936 00:15:41.780 } 00:15:41.780 ] 00:15:41.780 }' 00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.780 23:55:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.780 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.781 [2024-11-02 23:55:35.794936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.781 "name": "raid_bdev1", 00:15:41.781 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:41.781 "strip_size_kb": 0, 00:15:41.781 "state": "online", 00:15:41.781 "raid_level": "raid1", 00:15:41.781 "superblock": true, 00:15:41.781 "num_base_bdevs": 2, 00:15:41.781 "num_base_bdevs_discovered": 1, 00:15:41.781 "num_base_bdevs_operational": 1, 00:15:41.781 "base_bdevs_list": [ 00:15:41.781 { 00:15:41.781 "name": null, 00:15:41.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.781 "is_configured": false, 00:15:41.781 "data_offset": 0, 00:15:41.781 "data_size": 7936 00:15:41.781 }, 00:15:41.781 { 00:15:41.781 "name": "BaseBdev2", 00:15:41.781 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:41.781 "is_configured": true, 00:15:41.781 "data_offset": 256, 00:15:41.781 "data_size": 7936 00:15:41.781 } 00:15:41.781 ] 00:15:41.781 }' 00:15:41.781 23:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.781 23:55:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:42.350 23:55:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:42.350 23:55:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.350 23:55:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:42.350 [2024-11-02 23:55:36.230340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.350 [2024-11-02 23:55:36.230602] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:42.350 [2024-11-02 23:55:36.230691] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:42.350 [2024-11-02 23:55:36.230807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.350 [2024-11-02 23:55:36.233211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:15:42.350 [2024-11-02 23:55:36.235017] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.350 23:55:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.350 23:55:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:43.289 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.289 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.289 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.289 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:43.289 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.289 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.289 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.289 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.289 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:43.289 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.289 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.289 "name": "raid_bdev1", 00:15:43.289 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:43.289 "strip_size_kb": 0, 00:15:43.289 "state": "online", 00:15:43.289 "raid_level": "raid1", 00:15:43.289 "superblock": true, 00:15:43.289 "num_base_bdevs": 2, 00:15:43.289 "num_base_bdevs_discovered": 2, 00:15:43.289 "num_base_bdevs_operational": 2, 00:15:43.289 "process": { 00:15:43.289 "type": "rebuild", 00:15:43.289 "target": "spare", 00:15:43.289 "progress": { 00:15:43.289 "blocks": 2560, 00:15:43.289 "percent": 32 00:15:43.289 } 00:15:43.289 }, 00:15:43.289 "base_bdevs_list": [ 00:15:43.289 { 00:15:43.289 "name": "spare", 00:15:43.289 "uuid": "c2d6dec2-3694-52b0-934b-958932b48643", 00:15:43.289 "is_configured": true, 00:15:43.289 "data_offset": 256, 00:15:43.289 "data_size": 7936 00:15:43.289 }, 00:15:43.289 { 00:15:43.289 "name": "BaseBdev2", 00:15:43.289 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:43.289 "is_configured": true, 00:15:43.289 "data_offset": 256, 00:15:43.289 "data_size": 7936 00:15:43.289 } 00:15:43.289 ] 00:15:43.289 }' 00:15:43.289 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.289 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.289 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:43.550 [2024-11-02 23:55:37.397800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.550 [2024-11-02 23:55:37.439069] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:43.550 [2024-11-02 23:55:37.439118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.550 [2024-11-02 23:55:37.439135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.550 [2024-11-02 23:55:37.439142] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.550 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.550 "name": "raid_bdev1", 00:15:43.550 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:43.550 "strip_size_kb": 0, 00:15:43.550 "state": "online", 00:15:43.550 "raid_level": "raid1", 00:15:43.550 "superblock": true, 00:15:43.551 "num_base_bdevs": 2, 00:15:43.551 "num_base_bdevs_discovered": 1, 00:15:43.551 "num_base_bdevs_operational": 1, 00:15:43.551 "base_bdevs_list": [ 00:15:43.551 { 00:15:43.551 "name": null, 00:15:43.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.551 
"is_configured": false, 00:15:43.551 "data_offset": 0, 00:15:43.551 "data_size": 7936 00:15:43.551 }, 00:15:43.551 { 00:15:43.551 "name": "BaseBdev2", 00:15:43.551 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:43.551 "is_configured": true, 00:15:43.551 "data_offset": 256, 00:15:43.551 "data_size": 7936 00:15:43.551 } 00:15:43.551 ] 00:15:43.551 }' 00:15:43.551 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.551 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:43.811 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:43.811 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.811 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:43.811 [2024-11-02 23:55:37.889581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:43.811 [2024-11-02 23:55:37.889685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.811 [2024-11-02 23:55:37.889739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:43.811 [2024-11-02 23:55:37.889780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.811 [2024-11-02 23:55:37.890014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.811 [2024-11-02 23:55:37.890067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:43.811 [2024-11-02 23:55:37.890145] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:43.812 [2024-11-02 23:55:37.890179] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:15:43.812 [2024-11-02 23:55:37.890221] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:43.812 [2024-11-02 23:55:37.890304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:43.812 [2024-11-02 23:55:37.892551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:15:43.812 [2024-11-02 23:55:37.894375] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:43.812 spare 00:15:43.812 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.812 23:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:45.192 23:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.192 23:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.192 23:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.192 23:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.192 23:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.192 23:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.192 23:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.192 23:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.192 23:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.192 23:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:45.192 23:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.192 "name": "raid_bdev1", 00:15:45.192 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:45.192 "strip_size_kb": 0, 00:15:45.192 "state": "online", 00:15:45.192 "raid_level": "raid1", 00:15:45.192 "superblock": true, 00:15:45.192 "num_base_bdevs": 2, 00:15:45.192 "num_base_bdevs_discovered": 2, 00:15:45.192 "num_base_bdevs_operational": 2, 00:15:45.192 "process": { 00:15:45.192 "type": "rebuild", 00:15:45.192 "target": "spare", 00:15:45.192 "progress": { 00:15:45.192 "blocks": 2560, 00:15:45.192 "percent": 32 00:15:45.192 } 00:15:45.192 }, 00:15:45.192 "base_bdevs_list": [ 00:15:45.192 { 00:15:45.192 "name": "spare", 00:15:45.192 "uuid": "c2d6dec2-3694-52b0-934b-958932b48643", 00:15:45.192 "is_configured": true, 00:15:45.192 "data_offset": 256, 00:15:45.192 "data_size": 7936 00:15:45.192 }, 00:15:45.192 { 00:15:45.192 "name": "BaseBdev2", 00:15:45.192 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:45.192 "is_configured": true, 00:15:45.192 "data_offset": 256, 00:15:45.192 "data_size": 7936 00:15:45.192 } 00:15:45.192 ] 00:15:45.192 }' 00:15:45.192 23:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.192 23:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.192 23:55:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.192 [2024-11-02 23:55:39.053162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:45.192 [2024-11-02 23:55:39.098394] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:45.192 [2024-11-02 23:55:39.098449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.192 [2024-11-02 23:55:39.098463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:45.192 [2024-11-02 23:55:39.098471] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.192 23:55:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.192 "name": "raid_bdev1", 00:15:45.192 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:45.192 "strip_size_kb": 0, 00:15:45.192 "state": "online", 00:15:45.192 "raid_level": "raid1", 00:15:45.192 "superblock": true, 00:15:45.192 "num_base_bdevs": 2, 00:15:45.192 "num_base_bdevs_discovered": 1, 00:15:45.192 "num_base_bdevs_operational": 1, 00:15:45.192 "base_bdevs_list": [ 00:15:45.192 { 00:15:45.192 "name": null, 00:15:45.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.192 "is_configured": false, 00:15:45.192 "data_offset": 0, 00:15:45.192 "data_size": 7936 00:15:45.192 }, 00:15:45.192 { 00:15:45.192 "name": "BaseBdev2", 00:15:45.192 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:45.192 "is_configured": true, 00:15:45.192 "data_offset": 256, 00:15:45.192 "data_size": 7936 00:15:45.192 } 00:15:45.192 ] 00:15:45.192 }' 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.192 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.761 "name": "raid_bdev1", 00:15:45.761 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:45.761 "strip_size_kb": 0, 00:15:45.761 "state": "online", 00:15:45.761 "raid_level": "raid1", 00:15:45.761 "superblock": true, 00:15:45.761 "num_base_bdevs": 2, 00:15:45.761 "num_base_bdevs_discovered": 1, 00:15:45.761 "num_base_bdevs_operational": 1, 00:15:45.761 "base_bdevs_list": [ 00:15:45.761 { 00:15:45.761 "name": null, 00:15:45.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.761 "is_configured": false, 00:15:45.761 "data_offset": 0, 00:15:45.761 "data_size": 7936 00:15:45.761 }, 00:15:45.761 { 00:15:45.761 "name": "BaseBdev2", 00:15:45.761 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:45.761 "is_configured": true, 
00:15:45.761 "data_offset": 256, 00:15:45.761 "data_size": 7936 00:15:45.761 } 00:15:45.761 ] 00:15:45.761 }' 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.761 [2024-11-02 23:55:39.744258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:45.761 [2024-11-02 23:55:39.744359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.761 [2024-11-02 23:55:39.744381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:45.761 [2024-11-02 23:55:39.744390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.761 [2024-11-02 23:55:39.744593] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.761 [2024-11-02 23:55:39.744611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:45.761 [2024-11-02 23:55:39.744659] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:45.761 [2024-11-02 23:55:39.744677] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:45.761 [2024-11-02 23:55:39.744687] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:45.761 [2024-11-02 23:55:39.744698] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:45.761 BaseBdev1 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.761 23:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:46.699 23:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:46.699 23:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.699 23:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.699 23:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.699 23:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.699 23:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:46.699 23:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.699 23:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.699 23:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.699 23:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.699 23:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.699 23:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.699 23:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.699 23:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.699 23:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.959 23:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.959 "name": "raid_bdev1", 00:15:46.959 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:46.959 "strip_size_kb": 0, 00:15:46.959 "state": "online", 00:15:46.959 "raid_level": "raid1", 00:15:46.959 "superblock": true, 00:15:46.959 "num_base_bdevs": 2, 00:15:46.959 "num_base_bdevs_discovered": 1, 00:15:46.959 "num_base_bdevs_operational": 1, 00:15:46.959 "base_bdevs_list": [ 00:15:46.959 { 00:15:46.959 "name": null, 00:15:46.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.959 "is_configured": false, 00:15:46.959 "data_offset": 0, 00:15:46.959 "data_size": 7936 00:15:46.959 }, 00:15:46.959 { 00:15:46.959 "name": "BaseBdev2", 00:15:46.959 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:46.959 "is_configured": true, 00:15:46.959 "data_offset": 256, 00:15:46.959 "data_size": 7936 00:15:46.959 } 00:15:46.959 ] 00:15:46.959 }' 00:15:46.959 23:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.959 23:55:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.218 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:47.218 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.218 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:47.218 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:47.218 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.218 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.218 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.218 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.218 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.218 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.218 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.218 "name": "raid_bdev1", 00:15:47.218 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:47.218 "strip_size_kb": 0, 00:15:47.218 "state": "online", 00:15:47.218 "raid_level": "raid1", 00:15:47.218 "superblock": true, 00:15:47.218 "num_base_bdevs": 2, 00:15:47.218 "num_base_bdevs_discovered": 1, 00:15:47.218 "num_base_bdevs_operational": 1, 00:15:47.218 "base_bdevs_list": [ 00:15:47.218 { 00:15:47.218 "name": null, 00:15:47.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.218 "is_configured": false, 00:15:47.218 "data_offset": 0, 00:15:47.218 
"data_size": 7936 00:15:47.218 }, 00:15:47.218 { 00:15:47.218 "name": "BaseBdev2", 00:15:47.218 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:47.218 "is_configured": true, 00:15:47.218 "data_offset": 256, 00:15:47.218 "data_size": 7936 00:15:47.218 } 00:15:47.218 ] 00:15:47.218 }' 00:15:47.218 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.478 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:47.478 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.478 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:47.478 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:47.478 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:15:47.478 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:47.478 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:47.478 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:47.478 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:47.478 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:47.478 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:47.478 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:47.478 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.478 [2024-11-02 23:55:41.373782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:47.478 [2024-11-02 23:55:41.373941] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:47.478 [2024-11-02 23:55:41.373953] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:47.478 request: 00:15:47.478 { 00:15:47.478 "base_bdev": "BaseBdev1", 00:15:47.478 "raid_bdev": "raid_bdev1", 00:15:47.478 "method": "bdev_raid_add_base_bdev", 00:15:47.478 "req_id": 1 00:15:47.478 } 00:15:47.478 Got JSON-RPC error response 00:15:47.478 response: 00:15:47.478 { 00:15:47.478 "code": -22, 00:15:47.478 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:47.478 } 00:15:47.478 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:47.478 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:15:47.478 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:47.478 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:47.478 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:47.478 23:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:48.417 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:48.417 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.417 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.417 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.417 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.417 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:48.417 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.417 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.417 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.417 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.417 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.417 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.417 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.417 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.417 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.417 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.417 "name": "raid_bdev1", 00:15:48.417 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:48.417 "strip_size_kb": 0, 00:15:48.417 "state": "online", 00:15:48.417 "raid_level": "raid1", 00:15:48.417 "superblock": true, 00:15:48.417 "num_base_bdevs": 2, 00:15:48.417 "num_base_bdevs_discovered": 1, 00:15:48.417 "num_base_bdevs_operational": 1, 00:15:48.417 "base_bdevs_list": [ 
00:15:48.417 { 00:15:48.417 "name": null, 00:15:48.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.417 "is_configured": false, 00:15:48.417 "data_offset": 0, 00:15:48.417 "data_size": 7936 00:15:48.417 }, 00:15:48.417 { 00:15:48.417 "name": "BaseBdev2", 00:15:48.417 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:48.417 "is_configured": true, 00:15:48.417 "data_offset": 256, 00:15:48.417 "data_size": 7936 00:15:48.417 } 00:15:48.417 ] 00:15:48.417 }' 00:15:48.417 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.417 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.991 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:48.991 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.991 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:48.991 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:48.991 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.991 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.991 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.991 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.991 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.991 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.991 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.991 "name": "raid_bdev1", 00:15:48.991 "uuid": "272c04d2-e891-4504-998e-87b7ddcd34d1", 00:15:48.991 "strip_size_kb": 0, 00:15:48.991 "state": "online", 00:15:48.991 "raid_level": "raid1", 00:15:48.991 "superblock": true, 00:15:48.991 "num_base_bdevs": 2, 00:15:48.991 "num_base_bdevs_discovered": 1, 00:15:48.991 "num_base_bdevs_operational": 1, 00:15:48.991 "base_bdevs_list": [ 00:15:48.991 { 00:15:48.991 "name": null, 00:15:48.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.991 "is_configured": false, 00:15:48.991 "data_offset": 0, 00:15:48.991 "data_size": 7936 00:15:48.991 }, 00:15:48.991 { 00:15:48.991 "name": "BaseBdev2", 00:15:48.991 "uuid": "d468a3f3-bfd5-5bb1-93cd-e58424c94b7a", 00:15:48.991 "is_configured": true, 00:15:48.991 "data_offset": 256, 00:15:48.991 "data_size": 7936 00:15:48.991 } 00:15:48.991 ] 00:15:48.991 }' 00:15:48.991 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.991 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:48.991 23:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.991 23:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:48.991 23:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 97899 00:15:48.991 23:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 97899 ']' 00:15:48.991 23:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 97899 00:15:48.991 23:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:15:48.991 23:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:48.991 
23:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 97899 00:15:48.991 23:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:48.991 23:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:48.991 killing process with pid 97899 00:15:48.991 Received shutdown signal, test time was about 60.000000 seconds 00:15:48.991 00:15:48.991 Latency(us) 00:15:48.991 [2024-11-02T23:55:43.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.991 [2024-11-02T23:55:43.086Z] =================================================================================================================== 00:15:48.991 [2024-11-02T23:55:43.086Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:48.991 23:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 97899' 00:15:48.991 23:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 97899 00:15:48.991 [2024-11-02 23:55:43.050834] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:48.991 [2024-11-02 23:55:43.050964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.991 [2024-11-02 23:55:43.051013] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.991 [2024-11-02 23:55:43.051023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:15:48.991 23:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 97899 00:15:49.251 [2024-11-02 23:55:43.084084] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:49.251 23:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:15:49.251 00:15:49.251 real 0m18.504s 00:15:49.251 user 0m24.726s 00:15:49.251 sys 0m2.678s 00:15:49.251 23:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:49.252 ************************************ 00:15:49.252 END TEST raid_rebuild_test_sb_md_separate 00:15:49.252 ************************************ 00:15:49.252 23:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.252 23:55:43 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:15:49.252 23:55:43 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:15:49.252 23:55:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:49.252 23:55:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:49.252 23:55:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:49.512 ************************************ 00:15:49.512 START TEST raid_state_function_test_sb_md_interleaved 00:15:49.512 ************************************ 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:49.512 23:55:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:49.512 Process raid pid: 98576 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=98576 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98576' 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 98576 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 98576 ']' 00:15:49.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:49.512 23:55:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.512 [2024-11-02 23:55:43.464472] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:15:49.512 [2024-11-02 23:55:43.464741] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.772 [2024-11-02 23:55:43.622231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.772 [2024-11-02 23:55:43.647863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.772 [2024-11-02 23:55:43.690731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.772 [2024-11-02 23:55:43.690775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.339 [2024-11-02 23:55:44.272338] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:50.339 [2024-11-02 23:55:44.272458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:50.339 [2024-11-02 23:55:44.272473] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.339 [2024-11-02 23:55:44.272483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.339 23:55:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.339 23:55:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.339 "name": "Existed_Raid", 00:15:50.339 "uuid": "7d568fd8-9e1b-4a96-a16a-5c4659b5fc87", 00:15:50.339 "strip_size_kb": 0, 00:15:50.339 "state": "configuring", 00:15:50.339 "raid_level": "raid1", 00:15:50.339 "superblock": true, 00:15:50.339 "num_base_bdevs": 2, 00:15:50.339 "num_base_bdevs_discovered": 0, 00:15:50.339 "num_base_bdevs_operational": 2, 00:15:50.339 "base_bdevs_list": [ 00:15:50.339 { 00:15:50.339 "name": "BaseBdev1", 00:15:50.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.339 "is_configured": false, 00:15:50.339 "data_offset": 0, 00:15:50.339 "data_size": 0 00:15:50.339 }, 00:15:50.339 { 00:15:50.339 "name": "BaseBdev2", 00:15:50.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.339 "is_configured": false, 00:15:50.339 "data_offset": 0, 00:15:50.339 "data_size": 0 00:15:50.339 } 00:15:50.339 ] 00:15:50.339 }' 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.339 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.908 [2024-11-02 23:55:44.711406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.908 [2024-11-02 23:55:44.711500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state 
configuring 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.908 [2024-11-02 23:55:44.723403] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:50.908 [2024-11-02 23:55:44.723482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:50.908 [2024-11-02 23:55:44.723508] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.908 [2024-11-02 23:55:44.723542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.908 [2024-11-02 23:55:44.744250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.908 BaseBdev1 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.908 [ 00:15:50.908 { 00:15:50.908 "name": "BaseBdev1", 00:15:50.908 "aliases": [ 00:15:50.908 "b1b34d05-8d6a-4d43-b4cf-b7bb52e1f038" 00:15:50.908 ], 00:15:50.908 "product_name": "Malloc disk", 00:15:50.908 "block_size": 4128, 00:15:50.908 "num_blocks": 8192, 00:15:50.908 "uuid": "b1b34d05-8d6a-4d43-b4cf-b7bb52e1f038", 00:15:50.908 "md_size": 32, 00:15:50.908 
"md_interleave": true, 00:15:50.908 "dif_type": 0, 00:15:50.908 "assigned_rate_limits": { 00:15:50.908 "rw_ios_per_sec": 0, 00:15:50.908 "rw_mbytes_per_sec": 0, 00:15:50.908 "r_mbytes_per_sec": 0, 00:15:50.908 "w_mbytes_per_sec": 0 00:15:50.908 }, 00:15:50.908 "claimed": true, 00:15:50.908 "claim_type": "exclusive_write", 00:15:50.908 "zoned": false, 00:15:50.908 "supported_io_types": { 00:15:50.908 "read": true, 00:15:50.908 "write": true, 00:15:50.908 "unmap": true, 00:15:50.908 "flush": true, 00:15:50.908 "reset": true, 00:15:50.908 "nvme_admin": false, 00:15:50.908 "nvme_io": false, 00:15:50.908 "nvme_io_md": false, 00:15:50.908 "write_zeroes": true, 00:15:50.908 "zcopy": true, 00:15:50.908 "get_zone_info": false, 00:15:50.908 "zone_management": false, 00:15:50.908 "zone_append": false, 00:15:50.908 "compare": false, 00:15:50.908 "compare_and_write": false, 00:15:50.908 "abort": true, 00:15:50.908 "seek_hole": false, 00:15:50.908 "seek_data": false, 00:15:50.908 "copy": true, 00:15:50.908 "nvme_iov_md": false 00:15:50.908 }, 00:15:50.908 "memory_domains": [ 00:15:50.908 { 00:15:50.908 "dma_device_id": "system", 00:15:50.908 "dma_device_type": 1 00:15:50.908 }, 00:15:50.908 { 00:15:50.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.908 "dma_device_type": 2 00:15:50.908 } 00:15:50.908 ], 00:15:50.908 "driver_specific": {} 00:15:50.908 } 00:15:50.908 ] 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.908 23:55:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.908 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.909 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.909 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.909 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.909 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.909 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.909 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.909 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.909 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.909 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.909 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.909 "name": "Existed_Raid", 00:15:50.909 "uuid": "cfe6911a-cbdf-4427-bfab-b76d9eb2acaa", 00:15:50.909 "strip_size_kb": 0, 00:15:50.909 "state": "configuring", 00:15:50.909 "raid_level": "raid1", 
00:15:50.909 "superblock": true,
00:15:50.909 "num_base_bdevs": 2,
00:15:50.909 "num_base_bdevs_discovered": 1,
00:15:50.909 "num_base_bdevs_operational": 2,
00:15:50.909 "base_bdevs_list": [
00:15:50.909 {
00:15:50.909 "name": "BaseBdev1",
00:15:50.909 "uuid": "b1b34d05-8d6a-4d43-b4cf-b7bb52e1f038",
00:15:50.909 "is_configured": true,
00:15:50.909 "data_offset": 256,
00:15:50.909 "data_size": 7936
00:15:50.909 },
00:15:50.909 {
00:15:50.909 "name": "BaseBdev2",
00:15:50.909 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:50.909 "is_configured": false,
00:15:50.909 "data_offset": 0,
00:15:50.909 "data_size": 0
00:15:50.909 }
00:15:50.909 ]
00:15:50.909 }'
00:15:50.909 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:50.909 23:55:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:51.168 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:15:51.168 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.168 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:51.168 [2024-11-02 23:55:45.259484] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:51.168 [2024-11-02 23:55:45.259589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:51.427 [2024-11-02 23:55:45.271509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:51.427 [2024-11-02 23:55:45.273360] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:51.427 [2024-11-02 23:55:45.273443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:51.427 "name": "Existed_Raid",
00:15:51.427 "uuid": "7eecfda5-e48e-45a0-880b-ea83a7ad9dc2",
00:15:51.427 "strip_size_kb": 0,
00:15:51.427 "state": "configuring",
00:15:51.427 "raid_level": "raid1",
00:15:51.427 "superblock": true,
00:15:51.427 "num_base_bdevs": 2,
00:15:51.427 "num_base_bdevs_discovered": 1,
00:15:51.427 "num_base_bdevs_operational": 2,
00:15:51.427 "base_bdevs_list": [
00:15:51.427 {
00:15:51.427 "name": "BaseBdev1",
00:15:51.427 "uuid": "b1b34d05-8d6a-4d43-b4cf-b7bb52e1f038",
00:15:51.427 "is_configured": true,
00:15:51.427 "data_offset": 256,
00:15:51.427 "data_size": 7936
00:15:51.427 },
00:15:51.427 {
00:15:51.427 "name": "BaseBdev2",
00:15:51.427 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:51.427 "is_configured": false,
00:15:51.427 "data_offset": 0,
00:15:51.427 "data_size": 0
00:15:51.427 }
00:15:51.427 ]
00:15:51.427 }'
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:51.427 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:51.687 [2024-11-02 23:55:45.729584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:51.687 [2024-11-02 23:55:45.729859] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
00:15:51.687 [2024-11-02 23:55:45.729896] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:15:51.687 [2024-11-02 23:55:45.730033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
00:15:51.687 BaseBdev2 [2024-11-02 23:55:45.730143] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
00:15:51.687 [2024-11-02 23:55:45.730163] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900
00:15:51.687 [2024-11-02 23:55:45.730236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:51.687 [
00:15:51.687 {
00:15:51.687 "name": "BaseBdev2",
00:15:51.687 "aliases": [
00:15:51.687 "0526d624-e288-455f-8d41-bb528f322ba3"
00:15:51.687 ],
00:15:51.687 "product_name": "Malloc disk",
00:15:51.687 "block_size": 4128,
00:15:51.687 "num_blocks": 8192,
00:15:51.687 "uuid": "0526d624-e288-455f-8d41-bb528f322ba3",
00:15:51.687 "md_size": 32,
00:15:51.687 "md_interleave": true,
00:15:51.687 "dif_type": 0,
00:15:51.687 "assigned_rate_limits": {
00:15:51.687 "rw_ios_per_sec": 0,
00:15:51.687 "rw_mbytes_per_sec": 0,
00:15:51.687 "r_mbytes_per_sec": 0,
00:15:51.687 "w_mbytes_per_sec": 0
00:15:51.687 },
00:15:51.687 "claimed": true,
00:15:51.687 "claim_type": "exclusive_write",
00:15:51.687 "zoned": false,
00:15:51.687 "supported_io_types": {
00:15:51.687 "read": true,
00:15:51.687 "write": true,
00:15:51.687 "unmap": true,
00:15:51.687 "flush": true,
00:15:51.687 "reset": true,
00:15:51.687 "nvme_admin": false,
00:15:51.687 "nvme_io": false,
00:15:51.687 "nvme_io_md": false,
00:15:51.687 "write_zeroes": true,
00:15:51.687 "zcopy": true,
00:15:51.687 "get_zone_info": false,
00:15:51.687 "zone_management": false,
00:15:51.687 "zone_append": false,
00:15:51.687 "compare": false,
00:15:51.687 "compare_and_write": false,
00:15:51.687 "abort": true,
00:15:51.687 "seek_hole": false,
00:15:51.687 "seek_data": false,
00:15:51.687 "copy": true,
00:15:51.687 "nvme_iov_md": false
00:15:51.687 },
00:15:51.687 "memory_domains": [
00:15:51.687 {
00:15:51.687 "dma_device_id": "system",
00:15:51.687 "dma_device_type": 1
00:15:51.687 },
00:15:51.687 {
00:15:51.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:51.687 "dma_device_type": 2
00:15:51.687 }
00:15:51.687 ],
00:15:51.687 "driver_specific": {}
00:15:51.687 }
00:15:51.687 ]
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:51.687 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:51.688 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:51.688 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:51.688 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:51.688 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:51.688 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:51.688 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.688 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:51.947 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.947 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:51.947 "name": "Existed_Raid",
00:15:51.947 "uuid": "7eecfda5-e48e-45a0-880b-ea83a7ad9dc2",
00:15:51.947 "strip_size_kb": 0,
00:15:51.947 "state": "online",
00:15:51.947 "raid_level": "raid1",
00:15:51.947 "superblock": true,
00:15:51.947 "num_base_bdevs": 2,
00:15:51.947 "num_base_bdevs_discovered": 2,
00:15:51.947 "num_base_bdevs_operational": 2,
00:15:51.947 "base_bdevs_list": [
00:15:51.947 {
00:15:51.947 "name": "BaseBdev1",
00:15:51.947 "uuid": "b1b34d05-8d6a-4d43-b4cf-b7bb52e1f038",
00:15:51.947 "is_configured": true,
00:15:51.947 "data_offset": 256,
00:15:51.947 "data_size": 7936
00:15:51.947 },
00:15:51.947 {
00:15:51.947 "name": "BaseBdev2",
00:15:51.947 "uuid": "0526d624-e288-455f-8d41-bb528f322ba3",
00:15:51.947 "is_configured": true,
00:15:51.947 "data_offset": 256,
00:15:51.947 "data_size": 7936
00:15:51.947 }
00:15:51.947 ]
00:15:51.947 }'
00:15:51.947 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:51.947 23:55:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:52.207 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:15:52.207 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:15:52.207 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:52.207 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:52.207 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:15:52.207 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:52.207 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:15:52.207 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:52.207 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.207 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:52.207 [2024-11-02 23:55:46.225044] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:52.207 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.207 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:52.207 "name": "Existed_Raid",
00:15:52.207 "aliases": [
00:15:52.207 "7eecfda5-e48e-45a0-880b-ea83a7ad9dc2"
00:15:52.207 ],
00:15:52.207 "product_name": "Raid Volume",
00:15:52.207 "block_size": 4128,
00:15:52.207 "num_blocks": 7936,
00:15:52.207 "uuid": "7eecfda5-e48e-45a0-880b-ea83a7ad9dc2",
00:15:52.207 "md_size": 32,
00:15:52.207 "md_interleave": true,
00:15:52.207 "dif_type": 0,
00:15:52.207 "assigned_rate_limits": {
00:15:52.207 "rw_ios_per_sec": 0,
00:15:52.207 "rw_mbytes_per_sec": 0,
00:15:52.207 "r_mbytes_per_sec": 0,
00:15:52.207 "w_mbytes_per_sec": 0
00:15:52.207 },
00:15:52.207 "claimed": false,
00:15:52.207 "zoned": false,
00:15:52.207 "supported_io_types": {
00:15:52.207 "read": true,
00:15:52.207 "write": true,
00:15:52.207 "unmap": false,
00:15:52.207 "flush": false,
00:15:52.207 "reset": true,
00:15:52.207 "nvme_admin": false,
00:15:52.207 "nvme_io": false,
00:15:52.207 "nvme_io_md": false,
00:15:52.207 "write_zeroes": true,
00:15:52.207 "zcopy": false,
00:15:52.207 "get_zone_info": false,
00:15:52.207 "zone_management": false,
00:15:52.207 "zone_append": false,
00:15:52.207 "compare": false,
00:15:52.207 "compare_and_write": false,
00:15:52.207 "abort": false,
00:15:52.207 "seek_hole": false,
00:15:52.207 "seek_data": false,
00:15:52.207 "copy": false,
00:15:52.207 "nvme_iov_md": false
00:15:52.207 },
00:15:52.207 "memory_domains": [
00:15:52.207 {
00:15:52.207 "dma_device_id": "system",
00:15:52.207 "dma_device_type": 1
00:15:52.207 },
00:15:52.207 {
00:15:52.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:52.208 "dma_device_type": 2
00:15:52.208 },
00:15:52.208 {
00:15:52.208 "dma_device_id": "system",
00:15:52.208 "dma_device_type": 1
00:15:52.208 },
00:15:52.208 {
00:15:52.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:52.208 "dma_device_type": 2
00:15:52.208 }
00:15:52.208 ],
00:15:52.208 "driver_specific": {
00:15:52.208 "raid": {
00:15:52.208 "uuid": "7eecfda5-e48e-45a0-880b-ea83a7ad9dc2",
00:15:52.208 "strip_size_kb": 0,
00:15:52.208 "state": "online",
00:15:52.208 "raid_level": "raid1",
00:15:52.208 "superblock": true,
00:15:52.208 "num_base_bdevs": 2,
00:15:52.208 "num_base_bdevs_discovered": 2,
00:15:52.208 "num_base_bdevs_operational": 2,
00:15:52.208 "base_bdevs_list": [
00:15:52.208 {
00:15:52.208 "name": "BaseBdev1",
00:15:52.208 "uuid": "b1b34d05-8d6a-4d43-b4cf-b7bb52e1f038",
00:15:52.208 "is_configured": true,
00:15:52.208 "data_offset": 256,
00:15:52.208 "data_size": 7936
00:15:52.208 },
00:15:52.208 {
00:15:52.208 "name": "BaseBdev2",
00:15:52.208 "uuid": "0526d624-e288-455f-8d41-bb528f322ba3",
00:15:52.208 "is_configured": true,
00:15:52.208 "data_offset": 256,
00:15:52.208 "data_size": 7936
00:15:52.208 }
00:15:52.208 ]
00:15:52.208 }
00:15:52.208 }
00:15:52.208 }'
00:15:52.208 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:15:52.468 BaseBdev2'
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:52.468 [2024-11-02 23:55:46.452463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.468 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:52.468 "name": "Existed_Raid",
00:15:52.468 "uuid": "7eecfda5-e48e-45a0-880b-ea83a7ad9dc2",
00:15:52.468 "strip_size_kb": 0,
00:15:52.468 "state": "online",
00:15:52.468 "raid_level": "raid1",
00:15:52.468 "superblock": true,
00:15:52.468 "num_base_bdevs": 2,
00:15:52.468 "num_base_bdevs_discovered": 1,
00:15:52.468 "num_base_bdevs_operational": 1,
00:15:52.468 "base_bdevs_list": [
00:15:52.468 {
00:15:52.468 "name": null,
00:15:52.468 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:52.468 "is_configured": false,
00:15:52.468 "data_offset": 0,
00:15:52.468 "data_size": 7936
00:15:52.468 },
00:15:52.468 {
00:15:52.468 "name": "BaseBdev2",
00:15:52.468 "uuid": "0526d624-e288-455f-8d41-bb528f322ba3",
00:15:52.468 "is_configured": true,
00:15:52.468 "data_offset": 256,
00:15:52.468 "data_size": 7936
00:15:52.468 }
00:15:52.468 ]
00:15:52.468 }'
00:15:52.469 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:52.469 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:53.037 [2024-11-02 23:55:46.947513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:53.037 [2024-11-02 23:55:46.947613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:53.037 [2024-11-02 23:55:46.959636] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:53.037 [2024-11-02 23:55:46.959682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:53.037 [2024-11-02 23:55:46.959693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:15:53.037 23:55:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.037 23:55:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:15:53.037 23:55:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:15:53.037 23:55:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:15:53.037 23:55:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 98576
00:15:53.037 23:55:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 98576 ']'
00:15:53.037 23:55:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 98576
00:15:53.037 23:55:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname
00:15:53.037 23:55:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:53.037 23:55:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 98576 killing process with pid 98576
00:15:53.037 23:55:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:15:53.037 23:55:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:15:53.037 23:55:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 98576'
00:15:53.037 23:55:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 98576
00:15:53.037 [2024-11-02 23:55:47.056415] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:53.037 23:55:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 98576
00:15:53.037 [2024-11-02 23:55:47.057395] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:53.298 23:55:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 ************************************
00:15:53.298 END TEST raid_state_function_test_sb_md_interleaved
00:15:53.298
00:15:53.298 real 0m3.912s
00:15:53.298 user 0m6.146s
00:15:53.298 sys 0m0.887s
00:15:53.298 23:55:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:53.298 23:55:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:53.298 ************************************
00:15:53.298 23:55:47 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2
00:15:53.298 23:55:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:15:53.298 23:55:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:15:53.298 23:55:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:15:53.298 ************************************
00:15:53.298 START TEST raid_superblock_test_md_interleaved
00:15:53.298 ************************************
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=98817
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 98817
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 98817 ']' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable
00:15:53.298 23:55:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:53.561 [2024-11-02 23:55:47.445445] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization...
00:15:53.562 [2024-11-02 23:55:47.445669] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98817 ]
00:15:53.562 [2024-11-02 23:55:47.603642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:53.562 [2024-11-02 23:55:47.630236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:53.834 [2024-11-02 23:55:47.673950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:53.834 [2024-11-02 23:55:47.674063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x malloc1
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:54.416 [2024-11-02 23:55:48.284133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:54.416 [2024-11-02 23:55:48.284288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:54.416 [2024-11-02 23:55:48.284317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:15:54.416 [2024-11-02 23:55:48.284328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:54.416 [2024-11-02 23:55:48.286186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:54.416 [2024-11-02 23:55:48.286226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 pt1
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2
00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.416 23:55:48
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.416 malloc2 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.416 [2024-11-02 23:55:48.316789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:54.416 [2024-11-02 23:55:48.316889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.416 [2024-11-02 23:55:48.316937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:54.416 [2024-11-02 23:55:48.316966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.416 [2024-11-02 23:55:48.318822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.416 [2024-11-02 23:55:48.318891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:54.416 pt2 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.416 [2024-11-02 23:55:48.328806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:54.416 [2024-11-02 23:55:48.330680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:54.416 [2024-11-02 23:55:48.330897] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:54.416 [2024-11-02 23:55:48.330953] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:54.416 [2024-11-02 23:55:48.331060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:54.416 [2024-11-02 23:55:48.331158] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:54.416 [2024-11-02 23:55:48.331202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:54.416 [2024-11-02 23:55:48.331310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.416 23:55:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.416 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.416 "name": "raid_bdev1", 00:15:54.416 "uuid": "f8308b09-f0b1-400b-b1a1-206abe1a80c6", 00:15:54.416 "strip_size_kb": 0, 00:15:54.416 "state": "online", 00:15:54.416 "raid_level": "raid1", 00:15:54.416 "superblock": true, 00:15:54.416 "num_base_bdevs": 2, 00:15:54.416 "num_base_bdevs_discovered": 2, 00:15:54.416 "num_base_bdevs_operational": 2, 00:15:54.416 "base_bdevs_list": [ 00:15:54.416 { 00:15:54.416 "name": "pt1", 00:15:54.416 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:54.416 "is_configured": true, 00:15:54.416 "data_offset": 256, 00:15:54.416 "data_size": 7936 00:15:54.416 }, 00:15:54.416 { 00:15:54.416 "name": "pt2", 00:15:54.416 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:15:54.417 "is_configured": true, 00:15:54.417 "data_offset": 256, 00:15:54.417 "data_size": 7936 00:15:54.417 } 00:15:54.417 ] 00:15:54.417 }' 00:15:54.417 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.417 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.676 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:54.676 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:54.676 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:54.676 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:54.676 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:15:54.676 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:54.676 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:54.676 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:54.676 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.676 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.676 [2024-11-02 23:55:48.764310] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:15:54.953 "name": "raid_bdev1", 00:15:54.953 "aliases": [ 00:15:54.953 "f8308b09-f0b1-400b-b1a1-206abe1a80c6" 00:15:54.953 ], 00:15:54.953 "product_name": "Raid Volume", 00:15:54.953 "block_size": 4128, 00:15:54.953 "num_blocks": 7936, 00:15:54.953 "uuid": "f8308b09-f0b1-400b-b1a1-206abe1a80c6", 00:15:54.953 "md_size": 32, 00:15:54.953 "md_interleave": true, 00:15:54.953 "dif_type": 0, 00:15:54.953 "assigned_rate_limits": { 00:15:54.953 "rw_ios_per_sec": 0, 00:15:54.953 "rw_mbytes_per_sec": 0, 00:15:54.953 "r_mbytes_per_sec": 0, 00:15:54.953 "w_mbytes_per_sec": 0 00:15:54.953 }, 00:15:54.953 "claimed": false, 00:15:54.953 "zoned": false, 00:15:54.953 "supported_io_types": { 00:15:54.953 "read": true, 00:15:54.953 "write": true, 00:15:54.953 "unmap": false, 00:15:54.953 "flush": false, 00:15:54.953 "reset": true, 00:15:54.953 "nvme_admin": false, 00:15:54.953 "nvme_io": false, 00:15:54.953 "nvme_io_md": false, 00:15:54.953 "write_zeroes": true, 00:15:54.953 "zcopy": false, 00:15:54.953 "get_zone_info": false, 00:15:54.953 "zone_management": false, 00:15:54.953 "zone_append": false, 00:15:54.953 "compare": false, 00:15:54.953 "compare_and_write": false, 00:15:54.953 "abort": false, 00:15:54.953 "seek_hole": false, 00:15:54.953 "seek_data": false, 00:15:54.953 "copy": false, 00:15:54.953 "nvme_iov_md": false 00:15:54.953 }, 00:15:54.953 "memory_domains": [ 00:15:54.953 { 00:15:54.953 "dma_device_id": "system", 00:15:54.953 "dma_device_type": 1 00:15:54.953 }, 00:15:54.953 { 00:15:54.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.953 "dma_device_type": 2 00:15:54.953 }, 00:15:54.953 { 00:15:54.953 "dma_device_id": "system", 00:15:54.953 "dma_device_type": 1 00:15:54.953 }, 00:15:54.953 { 00:15:54.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.953 "dma_device_type": 2 00:15:54.953 } 00:15:54.953 ], 00:15:54.953 "driver_specific": { 00:15:54.953 "raid": { 00:15:54.953 "uuid": "f8308b09-f0b1-400b-b1a1-206abe1a80c6", 00:15:54.953 "strip_size_kb": 0, 
00:15:54.953 "state": "online", 00:15:54.953 "raid_level": "raid1", 00:15:54.953 "superblock": true, 00:15:54.953 "num_base_bdevs": 2, 00:15:54.953 "num_base_bdevs_discovered": 2, 00:15:54.953 "num_base_bdevs_operational": 2, 00:15:54.953 "base_bdevs_list": [ 00:15:54.953 { 00:15:54.953 "name": "pt1", 00:15:54.953 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:54.953 "is_configured": true, 00:15:54.953 "data_offset": 256, 00:15:54.953 "data_size": 7936 00:15:54.953 }, 00:15:54.953 { 00:15:54.953 "name": "pt2", 00:15:54.953 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.953 "is_configured": true, 00:15:54.953 "data_offset": 256, 00:15:54.953 "data_size": 7936 00:15:54.953 } 00:15:54.953 ] 00:15:54.953 } 00:15:54.953 } 00:15:54.953 }' 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:54.953 pt2' 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@10 -- # set +x 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.953 23:55:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:54.953 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:54.953 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:54.953 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.953 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.953 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:15:54.953 [2024-11-02 23:55:49.007815] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.953 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f8308b09-f0b1-400b-b1a1-206abe1a80c6 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z f8308b09-f0b1-400b-b1a1-206abe1a80c6 ']' 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.213 [2024-11-02 23:55:49.063502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.213 [2024-11-02 23:55:49.063528] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:55.213 [2024-11-02 23:55:49.063603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.213 [2024-11-02 23:55:49.063666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:55.213 [2024-11-02 23:55:49.063675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.213 23:55:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.213 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.213 [2024-11-02 23:55:49.207245] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:55.213 [2024-11-02 23:55:49.209045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:55.213 [2024-11-02 23:55:49.209121] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:55.213 [2024-11-02 23:55:49.209162] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:55.213 [2024-11-02 23:55:49.209178] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.213 [2024-11-02 23:55:49.209186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:15:55.213 request: 00:15:55.213 { 00:15:55.213 "name": "raid_bdev1", 00:15:55.213 "raid_level": "raid1", 00:15:55.213 "base_bdevs": [ 00:15:55.213 "malloc1", 00:15:55.213 "malloc2" 00:15:55.213 ], 00:15:55.213 "superblock": false, 00:15:55.214 "method": "bdev_raid_create", 00:15:55.214 "req_id": 1 00:15:55.214 } 00:15:55.214 Got JSON-RPC error response 00:15:55.214 response: 00:15:55.214 { 00:15:55.214 "code": -17, 00:15:55.214 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:55.214 } 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.214 [2024-11-02 23:55:49.271108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:55.214 [2024-11-02 23:55:49.271215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.214 [2024-11-02 23:55:49.271249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:55.214 [2024-11-02 23:55:49.271275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.214 [2024-11-02 23:55:49.273137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.214 [2024-11-02 23:55:49.273196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:55.214 [2024-11-02 23:55:49.273272] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt1 00:15:55.214 [2024-11-02 23:55:49.273339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:55.214 pt1 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:15:55.214 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.473 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.473 "name": "raid_bdev1", 00:15:55.473 "uuid": "f8308b09-f0b1-400b-b1a1-206abe1a80c6", 00:15:55.473 "strip_size_kb": 0, 00:15:55.473 "state": "configuring", 00:15:55.473 "raid_level": "raid1", 00:15:55.473 "superblock": true, 00:15:55.473 "num_base_bdevs": 2, 00:15:55.473 "num_base_bdevs_discovered": 1, 00:15:55.473 "num_base_bdevs_operational": 2, 00:15:55.473 "base_bdevs_list": [ 00:15:55.473 { 00:15:55.473 "name": "pt1", 00:15:55.473 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:55.473 "is_configured": true, 00:15:55.473 "data_offset": 256, 00:15:55.473 "data_size": 7936 00:15:55.473 }, 00:15:55.473 { 00:15:55.473 "name": null, 00:15:55.473 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.473 "is_configured": false, 00:15:55.473 "data_offset": 256, 00:15:55.473 "data_size": 7936 00:15:55.473 } 00:15:55.473 ] 00:15:55.473 }' 00:15:55.473 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.473 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.732 [2024-11-02 23:55:49.726509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:55.732 [2024-11-02 23:55:49.726561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.732 [2024-11-02 23:55:49.726605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:55.732 [2024-11-02 23:55:49.726614] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.732 [2024-11-02 23:55:49.726797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.732 [2024-11-02 23:55:49.726811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:55.732 [2024-11-02 23:55:49.726858] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:55.732 [2024-11-02 23:55:49.726884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.732 [2024-11-02 23:55:49.726966] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:55.732 [2024-11-02 23:55:49.726975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:55.732 [2024-11-02 23:55:49.727043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:55.732 [2024-11-02 23:55:49.727100] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:55.732 [2024-11-02 23:55:49.727117] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:15:55.732 [2024-11-02 23:55:49.727169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.732 pt2 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set 
+x 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.732 "name": "raid_bdev1", 00:15:55.732 "uuid": "f8308b09-f0b1-400b-b1a1-206abe1a80c6", 00:15:55.732 "strip_size_kb": 0, 00:15:55.732 "state": "online", 00:15:55.732 "raid_level": "raid1", 00:15:55.732 "superblock": true, 00:15:55.732 "num_base_bdevs": 2, 00:15:55.732 "num_base_bdevs_discovered": 2, 00:15:55.732 "num_base_bdevs_operational": 2, 00:15:55.732 "base_bdevs_list": [ 00:15:55.732 { 00:15:55.732 "name": "pt1", 00:15:55.732 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:55.732 "is_configured": true, 00:15:55.732 "data_offset": 256, 00:15:55.732 "data_size": 7936 00:15:55.732 }, 00:15:55.732 { 00:15:55.732 "name": "pt2", 00:15:55.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.732 "is_configured": true, 00:15:55.732 "data_offset": 256, 00:15:55.732 "data_size": 7936 00:15:55.732 } 00:15:55.732 ] 00:15:55.732 }' 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.732 23:55:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.301 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:56.301 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:56.301 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:56.301 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:56.301 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:15:56.301 23:55:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:56.301 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:56.301 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.301 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.301 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.301 [2024-11-02 23:55:50.209956] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.301 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.301 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:56.301 "name": "raid_bdev1", 00:15:56.301 "aliases": [ 00:15:56.301 "f8308b09-f0b1-400b-b1a1-206abe1a80c6" 00:15:56.301 ], 00:15:56.301 "product_name": "Raid Volume", 00:15:56.301 "block_size": 4128, 00:15:56.301 "num_blocks": 7936, 00:15:56.301 "uuid": "f8308b09-f0b1-400b-b1a1-206abe1a80c6", 00:15:56.301 "md_size": 32, 00:15:56.301 "md_interleave": true, 00:15:56.301 "dif_type": 0, 00:15:56.301 "assigned_rate_limits": { 00:15:56.301 "rw_ios_per_sec": 0, 00:15:56.301 "rw_mbytes_per_sec": 0, 00:15:56.301 "r_mbytes_per_sec": 0, 00:15:56.301 "w_mbytes_per_sec": 0 00:15:56.301 }, 00:15:56.301 "claimed": false, 00:15:56.301 "zoned": false, 00:15:56.301 "supported_io_types": { 00:15:56.301 "read": true, 00:15:56.301 "write": true, 00:15:56.301 "unmap": false, 00:15:56.301 "flush": false, 00:15:56.301 "reset": true, 00:15:56.301 "nvme_admin": false, 00:15:56.301 "nvme_io": false, 00:15:56.301 "nvme_io_md": false, 00:15:56.301 "write_zeroes": true, 00:15:56.301 "zcopy": false, 00:15:56.301 "get_zone_info": false, 00:15:56.301 "zone_management": 
false, 00:15:56.301 "zone_append": false, 00:15:56.301 "compare": false, 00:15:56.301 "compare_and_write": false, 00:15:56.301 "abort": false, 00:15:56.301 "seek_hole": false, 00:15:56.301 "seek_data": false, 00:15:56.301 "copy": false, 00:15:56.301 "nvme_iov_md": false 00:15:56.301 }, 00:15:56.301 "memory_domains": [ 00:15:56.301 { 00:15:56.301 "dma_device_id": "system", 00:15:56.301 "dma_device_type": 1 00:15:56.301 }, 00:15:56.301 { 00:15:56.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.301 "dma_device_type": 2 00:15:56.301 }, 00:15:56.301 { 00:15:56.301 "dma_device_id": "system", 00:15:56.301 "dma_device_type": 1 00:15:56.301 }, 00:15:56.301 { 00:15:56.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.301 "dma_device_type": 2 00:15:56.301 } 00:15:56.301 ], 00:15:56.301 "driver_specific": { 00:15:56.301 "raid": { 00:15:56.301 "uuid": "f8308b09-f0b1-400b-b1a1-206abe1a80c6", 00:15:56.301 "strip_size_kb": 0, 00:15:56.301 "state": "online", 00:15:56.301 "raid_level": "raid1", 00:15:56.301 "superblock": true, 00:15:56.301 "num_base_bdevs": 2, 00:15:56.302 "num_base_bdevs_discovered": 2, 00:15:56.302 "num_base_bdevs_operational": 2, 00:15:56.302 "base_bdevs_list": [ 00:15:56.302 { 00:15:56.302 "name": "pt1", 00:15:56.302 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:56.302 "is_configured": true, 00:15:56.302 "data_offset": 256, 00:15:56.302 "data_size": 7936 00:15:56.302 }, 00:15:56.302 { 00:15:56.302 "name": "pt2", 00:15:56.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.302 "is_configured": true, 00:15:56.302 "data_offset": 256, 00:15:56.302 "data_size": 7936 00:15:56.302 } 00:15:56.302 ] 00:15:56.302 } 00:15:56.302 } 00:15:56.302 }' 00:15:56.302 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:56.302 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 
00:15:56.302 pt2' 00:15:56.302 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.302 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:15:56.302 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.302 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.302 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:56.302 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.302 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.302 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.302 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:56.302 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:56.302 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.302 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.302 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:56.302 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.302 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:15:56.566 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.567 [2024-11-02 23:55:50.421542] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' f8308b09-f0b1-400b-b1a1-206abe1a80c6 '!=' f8308b09-f0b1-400b-b1a1-206abe1a80c6 ']' 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.567 23:55:50 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.567 [2024-11-02 23:55:50.465269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.567 "name": "raid_bdev1", 00:15:56.567 "uuid": "f8308b09-f0b1-400b-b1a1-206abe1a80c6", 00:15:56.567 "strip_size_kb": 0, 00:15:56.567 "state": "online", 00:15:56.567 "raid_level": "raid1", 00:15:56.567 "superblock": true, 00:15:56.567 "num_base_bdevs": 2, 00:15:56.567 "num_base_bdevs_discovered": 1, 00:15:56.567 "num_base_bdevs_operational": 1, 00:15:56.567 "base_bdevs_list": [ 00:15:56.567 { 00:15:56.567 "name": null, 00:15:56.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.567 "is_configured": false, 00:15:56.567 "data_offset": 0, 00:15:56.567 "data_size": 7936 00:15:56.567 }, 00:15:56.567 { 00:15:56.567 "name": "pt2", 00:15:56.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.567 "is_configured": true, 00:15:56.567 "data_offset": 256, 00:15:56.567 "data_size": 7936 00:15:56.567 } 00:15:56.567 ] 00:15:56.567 }' 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.567 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.144 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:57.144 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.144 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.144 [2024-11-02 23:55:50.976444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.144 [2024-11-02 23:55:50.976524] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online 
to offline 00:15:57.144 [2024-11-02 23:55:50.976623] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.144 [2024-11-02 23:55:50.976686] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.144 [2024-11-02 23:55:50.976734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:57.144 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.144 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.144 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:57.144 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.144 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.144 23:55:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.144 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:57.144 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:57.144 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:57.144 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:57.144 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:57.144 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.144 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.144 23:55:51 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.144 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:57.144 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:57.144 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:57.144 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:57.144 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:15:57.144 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:57.144 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.144 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.144 [2024-11-02 23:55:51.052292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:57.144 [2024-11-02 23:55:51.052387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.144 [2024-11-02 23:55:51.052441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:57.144 [2024-11-02 23:55:51.052468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.144 [2024-11-02 23:55:51.054189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.144 [2024-11-02 23:55:51.054252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:57.144 [2024-11-02 23:55:51.054319] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:57.144 [2024-11-02 23:55:51.054380] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:57.144 [2024-11-02 23:55:51.054453] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:57.144 [2024-11-02 23:55:51.054483] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:57.145 [2024-11-02 23:55:51.054592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:57.145 [2024-11-02 23:55:51.054693] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:57.145 [2024-11-02 23:55:51.054728] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:15:57.145 [2024-11-02 23:55:51.054828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.145 pt2 00:15:57.145 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.145 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:57.145 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.145 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.145 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.145 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.145 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:57.145 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.145 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:57.145 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.145 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.145 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.145 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.145 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.145 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.145 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.145 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.145 "name": "raid_bdev1", 00:15:57.145 "uuid": "f8308b09-f0b1-400b-b1a1-206abe1a80c6", 00:15:57.145 "strip_size_kb": 0, 00:15:57.145 "state": "online", 00:15:57.145 "raid_level": "raid1", 00:15:57.145 "superblock": true, 00:15:57.145 "num_base_bdevs": 2, 00:15:57.145 "num_base_bdevs_discovered": 1, 00:15:57.145 "num_base_bdevs_operational": 1, 00:15:57.145 "base_bdevs_list": [ 00:15:57.145 { 00:15:57.145 "name": null, 00:15:57.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.145 "is_configured": false, 00:15:57.145 "data_offset": 256, 00:15:57.145 "data_size": 7936 00:15:57.145 }, 00:15:57.145 { 00:15:57.145 "name": "pt2", 00:15:57.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.145 "is_configured": true, 00:15:57.145 "data_offset": 256, 00:15:57.145 "data_size": 7936 00:15:57.145 } 00:15:57.145 ] 00:15:57.145 }' 00:15:57.145 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.145 23:55:51 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.714 [2024-11-02 23:55:51.511491] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.714 [2024-11-02 23:55:51.511515] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.714 [2024-11-02 23:55:51.511575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.714 [2024-11-02 23:55:51.511616] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.714 [2024-11-02 23:55:51.511629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:57.714 23:55:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.714 [2024-11-02 23:55:51.571401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:57.714 [2024-11-02 23:55:51.571476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.714 [2024-11-02 23:55:51.571495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:15:57.714 [2024-11-02 23:55:51.571507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.714 [2024-11-02 23:55:51.573366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.714 [2024-11-02 23:55:51.573394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:57.714 [2024-11-02 23:55:51.573436] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:57.714 [2024-11-02 23:55:51.573468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:57.714 [2024-11-02 23:55:51.573550] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:57.714 [2024-11-02 23:55:51.573569] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.714 [2024-11-02 23:55:51.573591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000002000 name raid_bdev1, state configuring 00:15:57.714 [2024-11-02 23:55:51.573626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:57.714 [2024-11-02 23:55:51.573683] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:15:57.714 [2024-11-02 23:55:51.573693] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:57.714 [2024-11-02 23:55:51.573786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:57.714 [2024-11-02 23:55:51.573864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:15:57.714 [2024-11-02 23:55:51.573871] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:15:57.714 [2024-11-02 23:55:51.573929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.714 pt1 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:57.714 23:55:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.714 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.715 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.715 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.715 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.715 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.715 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.715 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.715 "name": "raid_bdev1", 00:15:57.715 "uuid": "f8308b09-f0b1-400b-b1a1-206abe1a80c6", 00:15:57.715 "strip_size_kb": 0, 00:15:57.715 "state": "online", 00:15:57.715 "raid_level": "raid1", 00:15:57.715 "superblock": true, 00:15:57.715 "num_base_bdevs": 2, 00:15:57.715 "num_base_bdevs_discovered": 1, 00:15:57.715 "num_base_bdevs_operational": 1, 00:15:57.715 "base_bdevs_list": [ 00:15:57.715 { 00:15:57.715 "name": null, 00:15:57.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.715 "is_configured": false, 00:15:57.715 "data_offset": 256, 00:15:57.715 "data_size": 7936 00:15:57.715 }, 00:15:57.715 { 00:15:57.715 "name": "pt2", 00:15:57.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.715 "is_configured": true, 00:15:57.715 "data_offset": 256, 00:15:57.715 
"data_size": 7936 00:15:57.715 } 00:15:57.715 ] 00:15:57.715 }' 00:15:57.715 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.715 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.973 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:57.973 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:57.973 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.973 23:55:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.973 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.973 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:57.973 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:57.973 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:57.973 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.973 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.973 [2024-11-02 23:55:52.034939] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.973 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.232 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' f8308b09-f0b1-400b-b1a1-206abe1a80c6 '!=' f8308b09-f0b1-400b-b1a1-206abe1a80c6 ']' 00:15:58.232 23:55:52 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 98817 00:15:58.232 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 98817 ']' 00:15:58.232 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 98817 00:15:58.232 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:15:58.232 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:58.232 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 98817 00:15:58.232 killing process with pid 98817 00:15:58.232 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:58.232 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:58.232 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 98817' 00:15:58.233 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 98817 00:15:58.233 [2024-11-02 23:55:52.117086] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:58.233 [2024-11-02 23:55:52.117162] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.233 [2024-11-02 23:55:52.117207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.233 [2024-11-02 23:55:52.117215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:15:58.233 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 98817 00:15:58.233 [2024-11-02 23:55:52.140236] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:58.492 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:15:58.492 00:15:58.492 real 0m5.006s 00:15:58.492 user 0m8.224s 00:15:58.492 sys 0m1.111s 00:15:58.492 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:58.492 23:55:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.492 ************************************ 00:15:58.492 END TEST raid_superblock_test_md_interleaved 00:15:58.492 ************************************ 00:15:58.492 23:55:52 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:15:58.492 23:55:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:58.492 23:55:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:58.492 23:55:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:58.492 ************************************ 00:15:58.492 START TEST raid_rebuild_test_sb_md_interleaved 00:15:58.492 ************************************ 00:15:58.492 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:15:58.492 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:58.492 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:58.492 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:58.492 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:58.492 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:15:58.492 23:55:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:58.493 
23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=99130 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99130 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 99130 ']' 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:58.493 23:55:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.493 [2024-11-02 23:55:52.544461] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:15:58.493 [2024-11-02 23:55:52.544664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:58.493 Zero copy mechanism will not be used. 
00:15:58.493 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99130 ] 00:15:58.752 [2024-11-02 23:55:52.703604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.752 [2024-11-02 23:55:52.730112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.752 [2024-11-02 23:55:52.772652] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.752 [2024-11-02 23:55:52.772808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.320 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:59.320 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:15:59.320 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:59.320 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:15:59.320 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.320 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.320 BaseBdev1_malloc 00:15:59.320 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.320 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:59.320 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.320 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.320 [2024-11-02 23:55:53.362457] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:59.320 [2024-11-02 23:55:53.362541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.320 [2024-11-02 23:55:53.362571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:59.320 [2024-11-02 23:55:53.362587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.320 [2024-11-02 23:55:53.364484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.320 [2024-11-02 23:55:53.364530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:59.320 BaseBdev1 00:15:59.320 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.320 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:59.320 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:15:59.320 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.320 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.320 BaseBdev2_malloc 00:15:59.320 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.320 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:59.320 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.320 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.320 [2024-11-02 23:55:53.391113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:15:59.320 [2024-11-02 23:55:53.391164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.320 [2024-11-02 23:55:53.391198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:59.320 [2024-11-02 23:55:53.391206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.320 [2024-11-02 23:55:53.393090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.320 [2024-11-02 23:55:53.393131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:59.321 BaseBdev2 00:15:59.321 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.321 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:15:59.321 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.321 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.580 spare_malloc 00:15:59.580 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.580 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:59.580 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.580 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.580 spare_delay 00:15:59.580 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.580 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create 
-b spare_delay -p spare 00:15:59.580 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.580 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.580 [2024-11-02 23:55:53.431852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:59.580 [2024-11-02 23:55:53.431907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.581 [2024-11-02 23:55:53.431954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:59.581 [2024-11-02 23:55:53.431962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.581 [2024-11-02 23:55:53.433849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.581 [2024-11-02 23:55:53.433943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:59.581 spare 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.581 [2024-11-02 23:55:53.443889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.581 [2024-11-02 23:55:53.445671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.581 [2024-11-02 23:55:53.445838] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:59.581 [2024-11-02 23:55:53.445852] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:59.581 [2024-11-02 23:55:53.445941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:59.581 [2024-11-02 23:55:53.446012] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:59.581 [2024-11-02 23:55:53.446023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:59.581 [2024-11-02 23:55:53.446093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.581 23:55:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.581 "name": "raid_bdev1", 00:15:59.581 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:15:59.581 "strip_size_kb": 0, 00:15:59.581 "state": "online", 00:15:59.581 "raid_level": "raid1", 00:15:59.581 "superblock": true, 00:15:59.581 "num_base_bdevs": 2, 00:15:59.581 "num_base_bdevs_discovered": 2, 00:15:59.581 "num_base_bdevs_operational": 2, 00:15:59.581 "base_bdevs_list": [ 00:15:59.581 { 00:15:59.581 "name": "BaseBdev1", 00:15:59.581 "uuid": "d3624e57-c4ef-5210-bd39-1af46b6a8cd7", 00:15:59.581 "is_configured": true, 00:15:59.581 "data_offset": 256, 00:15:59.581 "data_size": 7936 00:15:59.581 }, 00:15:59.581 { 00:15:59.581 "name": "BaseBdev2", 00:15:59.581 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:15:59.581 "is_configured": true, 00:15:59.581 "data_offset": 256, 00:15:59.581 "data_size": 7936 00:15:59.581 } 00:15:59.581 ] 00:15:59.581 }' 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.581 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.840 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.840 23:55:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.840 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.840 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:59.840 [2024-11-02 23:55:53.903298] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.840 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.100 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:00.100 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.100 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:00.100 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.100 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.100 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.100 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:00.100 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:00.100 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:16:00.100 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:00.100 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.100 23:55:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:00.100 [2024-11-02 23:55:53.998879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:00.100 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.100 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:00.100 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.100 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.100 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.100 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.100 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:00.100 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.100 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.100 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.100 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.100 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.100 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.100 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.100 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:16:00.100 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.100 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.100 "name": "raid_bdev1", 00:16:00.100 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:00.100 "strip_size_kb": 0, 00:16:00.100 "state": "online", 00:16:00.100 "raid_level": "raid1", 00:16:00.100 "superblock": true, 00:16:00.100 "num_base_bdevs": 2, 00:16:00.100 "num_base_bdevs_discovered": 1, 00:16:00.100 "num_base_bdevs_operational": 1, 00:16:00.100 "base_bdevs_list": [ 00:16:00.100 { 00:16:00.100 "name": null, 00:16:00.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.100 "is_configured": false, 00:16:00.100 "data_offset": 0, 00:16:00.100 "data_size": 7936 00:16:00.100 }, 00:16:00.100 { 00:16:00.100 "name": "BaseBdev2", 00:16:00.100 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:00.100 "is_configured": true, 00:16:00.100 "data_offset": 256, 00:16:00.100 "data_size": 7936 00:16:00.100 } 00:16:00.100 ] 00:16:00.100 }' 00:16:00.100 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.101 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.669 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:00.669 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.669 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.669 [2024-11-02 23:55:54.466108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:00.669 [2024-11-02 23:55:54.482034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:00.670 
23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.670 23:55:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:00.670 [2024-11-02 23:55:54.488825] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:01.608 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.608 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.608 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.608 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.608 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.608 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.608 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.608 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.608 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.608 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.608 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.608 "name": "raid_bdev1", 00:16:01.608 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:01.608 "strip_size_kb": 0, 00:16:01.608 "state": "online", 00:16:01.608 "raid_level": "raid1", 00:16:01.608 "superblock": true, 00:16:01.608 "num_base_bdevs": 2, 
00:16:01.608 "num_base_bdevs_discovered": 2, 00:16:01.608 "num_base_bdevs_operational": 2, 00:16:01.608 "process": { 00:16:01.608 "type": "rebuild", 00:16:01.608 "target": "spare", 00:16:01.608 "progress": { 00:16:01.608 "blocks": 2560, 00:16:01.608 "percent": 32 00:16:01.608 } 00:16:01.608 }, 00:16:01.608 "base_bdevs_list": [ 00:16:01.608 { 00:16:01.608 "name": "spare", 00:16:01.608 "uuid": "8c1ccd05-6f81-54c6-ae76-9d015cb8f7c1", 00:16:01.608 "is_configured": true, 00:16:01.608 "data_offset": 256, 00:16:01.608 "data_size": 7936 00:16:01.608 }, 00:16:01.608 { 00:16:01.608 "name": "BaseBdev2", 00:16:01.608 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:01.608 "is_configured": true, 00:16:01.608 "data_offset": 256, 00:16:01.608 "data_size": 7936 00:16:01.608 } 00:16:01.608 ] 00:16:01.608 }' 00:16:01.608 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.608 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.608 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.608 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.608 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:01.608 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.608 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.608 [2024-11-02 23:55:55.643578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:01.608 [2024-11-02 23:55:55.694036] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:01.608 [2024-11-02 23:55:55.694134] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.608 [2024-11-02 23:55:55.694171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:01.608 [2024-11-02 23:55:55.694180] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:01.868 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.868 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:01.868 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.868 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.868 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.868 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.868 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:01.868 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.868 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.868 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.868 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.868 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.868 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.868 23:55:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.868 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.868 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.868 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.868 "name": "raid_bdev1", 00:16:01.868 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:01.868 "strip_size_kb": 0, 00:16:01.868 "state": "online", 00:16:01.868 "raid_level": "raid1", 00:16:01.868 "superblock": true, 00:16:01.868 "num_base_bdevs": 2, 00:16:01.868 "num_base_bdevs_discovered": 1, 00:16:01.868 "num_base_bdevs_operational": 1, 00:16:01.868 "base_bdevs_list": [ 00:16:01.868 { 00:16:01.868 "name": null, 00:16:01.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.868 "is_configured": false, 00:16:01.868 "data_offset": 0, 00:16:01.868 "data_size": 7936 00:16:01.868 }, 00:16:01.868 { 00:16:01.868 "name": "BaseBdev2", 00:16:01.868 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:01.868 "is_configured": true, 00:16:01.868 "data_offset": 256, 00:16:01.868 "data_size": 7936 00:16:01.868 } 00:16:01.868 ] 00:16:01.868 }' 00:16:01.868 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.868 23:55:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.128 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:02.128 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.128 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:02.128 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@171 -- # local target=none 00:16:02.128 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.128 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.128 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.128 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.128 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.128 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.128 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.128 "name": "raid_bdev1", 00:16:02.128 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:02.128 "strip_size_kb": 0, 00:16:02.128 "state": "online", 00:16:02.128 "raid_level": "raid1", 00:16:02.128 "superblock": true, 00:16:02.128 "num_base_bdevs": 2, 00:16:02.128 "num_base_bdevs_discovered": 1, 00:16:02.128 "num_base_bdevs_operational": 1, 00:16:02.128 "base_bdevs_list": [ 00:16:02.128 { 00:16:02.128 "name": null, 00:16:02.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.128 "is_configured": false, 00:16:02.128 "data_offset": 0, 00:16:02.128 "data_size": 7936 00:16:02.128 }, 00:16:02.128 { 00:16:02.128 "name": "BaseBdev2", 00:16:02.128 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:02.128 "is_configured": true, 00:16:02.128 "data_offset": 256, 00:16:02.128 "data_size": 7936 00:16:02.128 } 00:16:02.128 ] 00:16:02.128 }' 00:16:02.128 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.387 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # 
[[ none == \n\o\n\e ]] 00:16:02.387 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.387 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:02.387 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:02.388 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.388 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.388 [2024-11-02 23:55:56.301106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:02.388 [2024-11-02 23:55:56.304697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:02.388 [2024-11-02 23:55:56.306590] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:02.388 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.388 23:55:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:03.326 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.326 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.326 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.326 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.326 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.326 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.326 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.326 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.326 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:03.326 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.326 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.326 "name": "raid_bdev1", 00:16:03.326 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:03.326 "strip_size_kb": 0, 00:16:03.326 "state": "online", 00:16:03.326 "raid_level": "raid1", 00:16:03.326 "superblock": true, 00:16:03.326 "num_base_bdevs": 2, 00:16:03.326 "num_base_bdevs_discovered": 2, 00:16:03.326 "num_base_bdevs_operational": 2, 00:16:03.326 "process": { 00:16:03.326 "type": "rebuild", 00:16:03.326 "target": "spare", 00:16:03.326 "progress": { 00:16:03.326 "blocks": 2560, 00:16:03.326 "percent": 32 00:16:03.326 } 00:16:03.326 }, 00:16:03.326 "base_bdevs_list": [ 00:16:03.326 { 00:16:03.326 "name": "spare", 00:16:03.326 "uuid": "8c1ccd05-6f81-54c6-ae76-9d015cb8f7c1", 00:16:03.326 "is_configured": true, 00:16:03.326 "data_offset": 256, 00:16:03.326 "data_size": 7936 00:16:03.326 }, 00:16:03.326 { 00:16:03.326 "name": "BaseBdev2", 00:16:03.326 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:03.326 "is_configured": true, 00:16:03.326 "data_offset": 256, 00:16:03.326 "data_size": 7936 00:16:03.326 } 00:16:03.326 ] 00:16:03.326 }' 00:16:03.326 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.327 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.327 23:55:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:03.586 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=616 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.586 23:55:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.586 "name": "raid_bdev1", 00:16:03.586 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:03.586 "strip_size_kb": 0, 00:16:03.586 "state": "online", 00:16:03.586 "raid_level": "raid1", 00:16:03.586 "superblock": true, 00:16:03.586 "num_base_bdevs": 2, 00:16:03.586 "num_base_bdevs_discovered": 2, 00:16:03.586 "num_base_bdevs_operational": 2, 00:16:03.586 "process": { 00:16:03.586 "type": "rebuild", 00:16:03.586 "target": "spare", 00:16:03.586 "progress": { 00:16:03.586 "blocks": 2816, 00:16:03.586 "percent": 35 00:16:03.586 } 00:16:03.586 }, 00:16:03.586 "base_bdevs_list": [ 00:16:03.586 { 00:16:03.586 "name": "spare", 00:16:03.586 "uuid": "8c1ccd05-6f81-54c6-ae76-9d015cb8f7c1", 00:16:03.586 "is_configured": true, 00:16:03.586 "data_offset": 256, 00:16:03.586 "data_size": 7936 00:16:03.586 }, 00:16:03.586 { 00:16:03.586 "name": "BaseBdev2", 00:16:03.586 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:03.586 "is_configured": true, 00:16:03.586 "data_offset": 256, 00:16:03.586 "data_size": 7936 00:16:03.586 } 00:16:03.586 ] 00:16:03.586 }' 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.586 23:55:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:04.523 23:55:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.523 23:55:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.523 23:55:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.523 23:55:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.523 23:55:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.523 23:55:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.523 23:55:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.523 23:55:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.523 23:55:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.523 23:55:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.523 23:55:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.782 23:55:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.782 "name": "raid_bdev1", 00:16:04.782 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:04.782 "strip_size_kb": 0, 00:16:04.782 "state": "online", 00:16:04.782 "raid_level": "raid1", 
00:16:04.782 "superblock": true, 00:16:04.782 "num_base_bdevs": 2, 00:16:04.782 "num_base_bdevs_discovered": 2, 00:16:04.782 "num_base_bdevs_operational": 2, 00:16:04.782 "process": { 00:16:04.782 "type": "rebuild", 00:16:04.782 "target": "spare", 00:16:04.782 "progress": { 00:16:04.782 "blocks": 5632, 00:16:04.782 "percent": 70 00:16:04.782 } 00:16:04.782 }, 00:16:04.782 "base_bdevs_list": [ 00:16:04.782 { 00:16:04.782 "name": "spare", 00:16:04.782 "uuid": "8c1ccd05-6f81-54c6-ae76-9d015cb8f7c1", 00:16:04.782 "is_configured": true, 00:16:04.782 "data_offset": 256, 00:16:04.782 "data_size": 7936 00:16:04.782 }, 00:16:04.782 { 00:16:04.782 "name": "BaseBdev2", 00:16:04.782 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:04.782 "is_configured": true, 00:16:04.782 "data_offset": 256, 00:16:04.782 "data_size": 7936 00:16:04.782 } 00:16:04.782 ] 00:16:04.782 }' 00:16:04.782 23:55:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.782 23:55:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.782 23:55:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.782 23:55:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.782 23:55:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:05.349 [2024-11-02 23:55:59.417137] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:05.349 [2024-11-02 23:55:59.417204] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:05.349 [2024-11-02 23:55:59.417307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.916 "name": "raid_bdev1", 00:16:05.916 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:05.916 "strip_size_kb": 0, 00:16:05.916 "state": "online", 00:16:05.916 "raid_level": "raid1", 00:16:05.916 "superblock": true, 00:16:05.916 "num_base_bdevs": 2, 00:16:05.916 "num_base_bdevs_discovered": 2, 00:16:05.916 "num_base_bdevs_operational": 2, 00:16:05.916 "base_bdevs_list": [ 00:16:05.916 { 00:16:05.916 "name": "spare", 00:16:05.916 "uuid": "8c1ccd05-6f81-54c6-ae76-9d015cb8f7c1", 00:16:05.916 "is_configured": true, 00:16:05.916 "data_offset": 256, 00:16:05.916 "data_size": 7936 00:16:05.916 }, 
00:16:05.916 { 00:16:05.916 "name": "BaseBdev2", 00:16:05.916 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:05.916 "is_configured": true, 00:16:05.916 "data_offset": 256, 00:16:05.916 "data_size": 7936 00:16:05.916 } 00:16:05.916 ] 00:16:05.916 }' 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- 
# set +x 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.916 "name": "raid_bdev1", 00:16:05.916 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:05.916 "strip_size_kb": 0, 00:16:05.916 "state": "online", 00:16:05.916 "raid_level": "raid1", 00:16:05.916 "superblock": true, 00:16:05.916 "num_base_bdevs": 2, 00:16:05.916 "num_base_bdevs_discovered": 2, 00:16:05.916 "num_base_bdevs_operational": 2, 00:16:05.916 "base_bdevs_list": [ 00:16:05.916 { 00:16:05.916 "name": "spare", 00:16:05.916 "uuid": "8c1ccd05-6f81-54c6-ae76-9d015cb8f7c1", 00:16:05.916 "is_configured": true, 00:16:05.916 "data_offset": 256, 00:16:05.916 "data_size": 7936 00:16:05.916 }, 00:16:05.916 { 00:16:05.916 "name": "BaseBdev2", 00:16:05.916 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:05.916 "is_configured": true, 00:16:05.916 "data_offset": 256, 00:16:05.916 "data_size": 7936 00:16:05.916 } 00:16:05.916 ] 00:16:05.916 }' 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:05.916 23:55:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.175 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.175 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:06.175 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.175 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:06.175 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.175 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.175 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.175 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.175 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.175 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.175 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.175 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.175 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.175 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.175 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.175 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.175 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.175 "name": "raid_bdev1", 00:16:06.175 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:06.175 "strip_size_kb": 0, 00:16:06.175 "state": "online", 00:16:06.175 "raid_level": "raid1", 00:16:06.175 "superblock": true, 00:16:06.175 "num_base_bdevs": 2, 00:16:06.175 "num_base_bdevs_discovered": 2, 00:16:06.175 "num_base_bdevs_operational": 2, 00:16:06.175 "base_bdevs_list": [ 
00:16:06.175 { 00:16:06.175 "name": "spare", 00:16:06.175 "uuid": "8c1ccd05-6f81-54c6-ae76-9d015cb8f7c1", 00:16:06.175 "is_configured": true, 00:16:06.175 "data_offset": 256, 00:16:06.175 "data_size": 7936 00:16:06.175 }, 00:16:06.175 { 00:16:06.176 "name": "BaseBdev2", 00:16:06.176 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:06.176 "is_configured": true, 00:16:06.176 "data_offset": 256, 00:16:06.176 "data_size": 7936 00:16:06.176 } 00:16:06.176 ] 00:16:06.176 }' 00:16:06.176 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.176 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.434 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:06.434 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.434 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.434 [2024-11-02 23:56:00.506980] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.434 [2024-11-02 23:56:00.507009] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.434 [2024-11-02 23:56:00.507090] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.434 [2024-11-02 23:56:00.507170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.434 [2024-11-02 23:56:00.507184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:16:06.434 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.434 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:06.434 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:16:06.434 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.434 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.693 [2024-11-02 23:56:00.578859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:06.693 [2024-11-02 23:56:00.578969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.693 [2024-11-02 23:56:00.579023] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:06.693 [2024-11-02 23:56:00.579056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.693 [2024-11-02 23:56:00.580994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.693 [2024-11-02 23:56:00.581063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:06.693 [2024-11-02 23:56:00.581116] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:06.693 [2024-11-02 23:56:00.581162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.693 [2024-11-02 23:56:00.581247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:06.693 spare 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.693 [2024-11-02 23:56:00.681144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:16:06.693 [2024-11-02 23:56:00.681166] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:06.693 [2024-11-02 23:56:00.681249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:16:06.693 [2024-11-02 23:56:00.681324] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:16:06.693 [2024-11-02 23:56:00.681335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:16:06.693 [2024-11-02 
23:56:00.681406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.693 23:56:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.693 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.693 "name": "raid_bdev1", 00:16:06.693 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:06.693 "strip_size_kb": 0, 00:16:06.693 "state": "online", 00:16:06.693 "raid_level": "raid1", 00:16:06.693 "superblock": true, 00:16:06.693 "num_base_bdevs": 2, 00:16:06.693 "num_base_bdevs_discovered": 2, 00:16:06.693 "num_base_bdevs_operational": 2, 00:16:06.693 "base_bdevs_list": [ 00:16:06.693 { 00:16:06.693 "name": "spare", 00:16:06.693 "uuid": "8c1ccd05-6f81-54c6-ae76-9d015cb8f7c1", 00:16:06.693 "is_configured": true, 00:16:06.693 "data_offset": 256, 00:16:06.693 "data_size": 7936 00:16:06.693 }, 00:16:06.694 { 00:16:06.694 "name": "BaseBdev2", 00:16:06.694 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:06.694 "is_configured": true, 00:16:06.694 "data_offset": 256, 00:16:06.694 "data_size": 7936 00:16:06.694 } 00:16:06.694 ] 00:16:06.694 }' 00:16:06.694 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.694 23:56:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.260 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:07.260 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.260 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:07.260 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:07.260 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.260 23:56:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.260 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.260 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.260 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.260 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.260 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.260 "name": "raid_bdev1", 00:16:07.260 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:07.260 "strip_size_kb": 0, 00:16:07.260 "state": "online", 00:16:07.261 "raid_level": "raid1", 00:16:07.261 "superblock": true, 00:16:07.261 "num_base_bdevs": 2, 00:16:07.261 "num_base_bdevs_discovered": 2, 00:16:07.261 "num_base_bdevs_operational": 2, 00:16:07.261 "base_bdevs_list": [ 00:16:07.261 { 00:16:07.261 "name": "spare", 00:16:07.261 "uuid": "8c1ccd05-6f81-54c6-ae76-9d015cb8f7c1", 00:16:07.261 "is_configured": true, 00:16:07.261 "data_offset": 256, 00:16:07.261 "data_size": 7936 00:16:07.261 }, 00:16:07.261 { 00:16:07.261 "name": "BaseBdev2", 00:16:07.261 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:07.261 "is_configured": true, 00:16:07.261 "data_offset": 256, 00:16:07.261 "data_size": 7936 00:16:07.261 } 00:16:07.261 ] 00:16:07.261 }' 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.261 23:56:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.261 [2024-11-02 23:56:01.313708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.261 23:56:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.261 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.519 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.519 "name": "raid_bdev1", 00:16:07.519 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:07.519 "strip_size_kb": 0, 00:16:07.519 "state": "online", 00:16:07.519 "raid_level": "raid1", 00:16:07.519 "superblock": true, 00:16:07.519 "num_base_bdevs": 2, 00:16:07.519 "num_base_bdevs_discovered": 1, 00:16:07.519 "num_base_bdevs_operational": 1, 00:16:07.519 "base_bdevs_list": [ 00:16:07.519 { 00:16:07.519 "name": null, 00:16:07.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.519 "is_configured": false, 00:16:07.519 
"data_offset": 0, 00:16:07.519 "data_size": 7936 00:16:07.519 }, 00:16:07.519 { 00:16:07.519 "name": "BaseBdev2", 00:16:07.519 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:07.519 "is_configured": true, 00:16:07.519 "data_offset": 256, 00:16:07.519 "data_size": 7936 00:16:07.519 } 00:16:07.519 ] 00:16:07.519 }' 00:16:07.519 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.519 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.783 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:07.783 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.783 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.783 [2024-11-02 23:56:01.752953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.783 [2024-11-02 23:56:01.753156] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:07.783 [2024-11-02 23:56:01.753237] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:07.783 [2024-11-02 23:56:01.753303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.783 [2024-11-02 23:56:01.756816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:16:07.783 [2024-11-02 23:56:01.758703] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:07.783 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.783 23:56:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:08.742 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.742 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.742 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.742 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.742 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.742 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.742 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.742 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.742 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:08.742 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.742 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:08.742 "name": "raid_bdev1", 00:16:08.742 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:08.742 "strip_size_kb": 0, 00:16:08.742 "state": "online", 00:16:08.742 "raid_level": "raid1", 00:16:08.742 "superblock": true, 00:16:08.742 "num_base_bdevs": 2, 00:16:08.742 "num_base_bdevs_discovered": 2, 00:16:08.743 "num_base_bdevs_operational": 2, 00:16:08.743 "process": { 00:16:08.743 "type": "rebuild", 00:16:08.743 "target": "spare", 00:16:08.743 "progress": { 00:16:08.743 "blocks": 2560, 00:16:08.743 "percent": 32 00:16:08.743 } 00:16:08.743 }, 00:16:08.743 "base_bdevs_list": [ 00:16:08.743 { 00:16:08.743 "name": "spare", 00:16:08.743 "uuid": "8c1ccd05-6f81-54c6-ae76-9d015cb8f7c1", 00:16:08.743 "is_configured": true, 00:16:08.743 "data_offset": 256, 00:16:08.743 "data_size": 7936 00:16:08.743 }, 00:16:08.743 { 00:16:08.743 "name": "BaseBdev2", 00:16:08.743 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:08.743 "is_configured": true, 00:16:08.743 "data_offset": 256, 00:16:08.743 "data_size": 7936 00:16:08.743 } 00:16:08.743 ] 00:16:08.743 }' 00:16:08.743 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:09.001 [2024-11-02 23:56:02.921923] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.001 [2024-11-02 23:56:02.962704] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:09.001 [2024-11-02 23:56:02.962818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.001 [2024-11-02 23:56:02.962859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.001 [2024-11-02 23:56:02.962882] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.001 23:56:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:09.001 23:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.001 23:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.001 "name": "raid_bdev1", 00:16:09.001 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:09.001 "strip_size_kb": 0, 00:16:09.001 "state": "online", 00:16:09.001 "raid_level": "raid1", 00:16:09.001 "superblock": true, 00:16:09.001 "num_base_bdevs": 2, 00:16:09.001 "num_base_bdevs_discovered": 1, 00:16:09.001 "num_base_bdevs_operational": 1, 00:16:09.001 "base_bdevs_list": [ 00:16:09.001 { 00:16:09.001 "name": null, 00:16:09.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.001 "is_configured": false, 00:16:09.001 "data_offset": 0, 00:16:09.001 "data_size": 7936 00:16:09.001 }, 00:16:09.001 { 00:16:09.001 "name": "BaseBdev2", 00:16:09.001 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:09.001 "is_configured": true, 00:16:09.001 "data_offset": 256, 00:16:09.001 "data_size": 7936 00:16:09.001 } 00:16:09.001 ] 00:16:09.001 }' 00:16:09.001 23:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.001 23:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:09.567 23:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:09.567 23:56:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.567 23:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:09.567 [2024-11-02 23:56:03.430068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:09.567 [2024-11-02 23:56:03.430178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.567 [2024-11-02 23:56:03.430238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:09.567 [2024-11-02 23:56:03.430259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.567 [2024-11-02 23:56:03.430459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.567 [2024-11-02 23:56:03.430471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:09.567 [2024-11-02 23:56:03.430528] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:09.567 [2024-11-02 23:56:03.430539] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:09.567 [2024-11-02 23:56:03.430550] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:09.567 [2024-11-02 23:56:03.430577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.567 [2024-11-02 23:56:03.434032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:16:09.567 spare 00:16:09.567 23:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.567 23:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:09.567 [2024-11-02 23:56:03.435960] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.505 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.505 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.505 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.505 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.505 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.505 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.505 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.505 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.505 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.505 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.505 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:10.505 "name": "raid_bdev1", 00:16:10.505 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:10.505 "strip_size_kb": 0, 00:16:10.505 "state": "online", 00:16:10.505 "raid_level": "raid1", 00:16:10.505 "superblock": true, 00:16:10.505 "num_base_bdevs": 2, 00:16:10.505 "num_base_bdevs_discovered": 2, 00:16:10.505 "num_base_bdevs_operational": 2, 00:16:10.505 "process": { 00:16:10.505 "type": "rebuild", 00:16:10.505 "target": "spare", 00:16:10.505 "progress": { 00:16:10.505 "blocks": 2560, 00:16:10.505 "percent": 32 00:16:10.505 } 00:16:10.505 }, 00:16:10.505 "base_bdevs_list": [ 00:16:10.505 { 00:16:10.505 "name": "spare", 00:16:10.505 "uuid": "8c1ccd05-6f81-54c6-ae76-9d015cb8f7c1", 00:16:10.505 "is_configured": true, 00:16:10.505 "data_offset": 256, 00:16:10.505 "data_size": 7936 00:16:10.505 }, 00:16:10.505 { 00:16:10.505 "name": "BaseBdev2", 00:16:10.505 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:10.505 "is_configured": true, 00:16:10.505 "data_offset": 256, 00:16:10.505 "data_size": 7936 00:16:10.505 } 00:16:10.505 ] 00:16:10.505 }' 00:16:10.505 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.505 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.505 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.505 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.505 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:10.505 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.505 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.505 [2024-11-02 
23:56:04.576750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.765 [2024-11-02 23:56:04.639884] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:10.765 [2024-11-02 23:56:04.639985] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.765 [2024-11-02 23:56:04.640016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.765 [2024-11-02 23:56:04.640038] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:10.765 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.765 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:10.765 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.765 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.765 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.765 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.765 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:10.765 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.765 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.765 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.765 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.765 23:56:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.765 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.765 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.765 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.765 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.765 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.765 "name": "raid_bdev1", 00:16:10.765 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:10.765 "strip_size_kb": 0, 00:16:10.766 "state": "online", 00:16:10.766 "raid_level": "raid1", 00:16:10.766 "superblock": true, 00:16:10.766 "num_base_bdevs": 2, 00:16:10.766 "num_base_bdevs_discovered": 1, 00:16:10.766 "num_base_bdevs_operational": 1, 00:16:10.766 "base_bdevs_list": [ 00:16:10.766 { 00:16:10.766 "name": null, 00:16:10.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.766 "is_configured": false, 00:16:10.766 "data_offset": 0, 00:16:10.766 "data_size": 7936 00:16:10.766 }, 00:16:10.766 { 00:16:10.766 "name": "BaseBdev2", 00:16:10.766 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:10.766 "is_configured": true, 00:16:10.766 "data_offset": 256, 00:16:10.766 "data_size": 7936 00:16:10.766 } 00:16:10.766 ] 00:16:10.766 }' 00:16:10.766 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.766 23:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.024 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.024 23:56:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.024 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.024 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.024 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.024 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.024 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.024 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.024 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.024 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.294 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.294 "name": "raid_bdev1", 00:16:11.294 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:11.294 "strip_size_kb": 0, 00:16:11.294 "state": "online", 00:16:11.294 "raid_level": "raid1", 00:16:11.294 "superblock": true, 00:16:11.294 "num_base_bdevs": 2, 00:16:11.294 "num_base_bdevs_discovered": 1, 00:16:11.294 "num_base_bdevs_operational": 1, 00:16:11.294 "base_bdevs_list": [ 00:16:11.294 { 00:16:11.294 "name": null, 00:16:11.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.294 "is_configured": false, 00:16:11.294 "data_offset": 0, 00:16:11.294 "data_size": 7936 00:16:11.294 }, 00:16:11.294 { 00:16:11.294 "name": "BaseBdev2", 00:16:11.294 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:11.294 "is_configured": true, 00:16:11.294 "data_offset": 256, 
00:16:11.294 "data_size": 7936 00:16:11.294 } 00:16:11.294 ] 00:16:11.294 }' 00:16:11.294 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.294 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.294 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.294 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.294 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:11.294 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.294 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.294 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.294 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:11.294 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.294 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.294 [2024-11-02 23:56:05.222924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:11.294 [2024-11-02 23:56:05.222977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.294 [2024-11-02 23:56:05.223011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:11.294 [2024-11-02 23:56:05.223033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.294 [2024-11-02 23:56:05.223180] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.294 [2024-11-02 23:56:05.223198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:11.294 [2024-11-02 23:56:05.223240] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:11.294 [2024-11-02 23:56:05.223254] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:11.294 [2024-11-02 23:56:05.223261] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:11.294 [2024-11-02 23:56:05.223282] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:11.294 BaseBdev1 00:16:11.294 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.294 23:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:12.232 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:12.232 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.232 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.232 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.232 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.232 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:12.232 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.232 23:56:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.232 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.232 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.232 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.232 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.232 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.232 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.232 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.232 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.232 "name": "raid_bdev1", 00:16:12.232 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:12.232 "strip_size_kb": 0, 00:16:12.232 "state": "online", 00:16:12.232 "raid_level": "raid1", 00:16:12.232 "superblock": true, 00:16:12.232 "num_base_bdevs": 2, 00:16:12.232 "num_base_bdevs_discovered": 1, 00:16:12.232 "num_base_bdevs_operational": 1, 00:16:12.232 "base_bdevs_list": [ 00:16:12.232 { 00:16:12.232 "name": null, 00:16:12.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.232 "is_configured": false, 00:16:12.232 "data_offset": 0, 00:16:12.232 "data_size": 7936 00:16:12.232 }, 00:16:12.232 { 00:16:12.232 "name": "BaseBdev2", 00:16:12.233 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:12.233 "is_configured": true, 00:16:12.233 "data_offset": 256, 00:16:12.233 "data_size": 7936 00:16:12.233 } 00:16:12.233 ] 00:16:12.233 }' 00:16:12.233 23:56:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.233 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.802 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.802 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.802 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.802 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.802 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.802 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.802 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.802 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.802 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.802 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.802 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.802 "name": "raid_bdev1", 00:16:12.802 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:12.802 "strip_size_kb": 0, 00:16:12.802 "state": "online", 00:16:12.802 "raid_level": "raid1", 00:16:12.802 "superblock": true, 00:16:12.802 "num_base_bdevs": 2, 00:16:12.802 "num_base_bdevs_discovered": 1, 00:16:12.802 "num_base_bdevs_operational": 1, 00:16:12.802 "base_bdevs_list": [ 00:16:12.802 { 00:16:12.802 "name": 
null, 00:16:12.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.803 "is_configured": false, 00:16:12.803 "data_offset": 0, 00:16:12.803 "data_size": 7936 00:16:12.803 }, 00:16:12.803 { 00:16:12.803 "name": "BaseBdev2", 00:16:12.803 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:12.803 "is_configured": true, 00:16:12.803 "data_offset": 256, 00:16:12.803 "data_size": 7936 00:16:12.803 } 00:16:12.803 ] 00:16:12.803 }' 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.803 [2024-11-02 23:56:06.832236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:12.803 [2024-11-02 23:56:06.832434] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:12.803 [2024-11-02 23:56:06.832489] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:12.803 request: 00:16:12.803 { 00:16:12.803 "base_bdev": "BaseBdev1", 00:16:12.803 "raid_bdev": "raid_bdev1", 00:16:12.803 "method": "bdev_raid_add_base_bdev", 00:16:12.803 "req_id": 1 00:16:12.803 } 00:16:12.803 Got JSON-RPC error response 00:16:12.803 response: 00:16:12.803 { 00:16:12.803 "code": -22, 00:16:12.803 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:12.803 } 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:12.803 23:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:14.182 23:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:16:14.182 23:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.182 23:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.182 23:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.182 23:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.182 23:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:14.182 23:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.182 23:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.182 23:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.182 23:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.182 23:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.182 23:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.182 23:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.182 23:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.182 23:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.182 23:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.182 "name": "raid_bdev1", 00:16:14.182 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:14.182 "strip_size_kb": 0, 
00:16:14.182 "state": "online", 00:16:14.182 "raid_level": "raid1", 00:16:14.182 "superblock": true, 00:16:14.182 "num_base_bdevs": 2, 00:16:14.182 "num_base_bdevs_discovered": 1, 00:16:14.182 "num_base_bdevs_operational": 1, 00:16:14.182 "base_bdevs_list": [ 00:16:14.182 { 00:16:14.182 "name": null, 00:16:14.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.182 "is_configured": false, 00:16:14.182 "data_offset": 0, 00:16:14.182 "data_size": 7936 00:16:14.182 }, 00:16:14.182 { 00:16:14.182 "name": "BaseBdev2", 00:16:14.183 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:14.183 "is_configured": true, 00:16:14.183 "data_offset": 256, 00:16:14.183 "data_size": 7936 00:16:14.183 } 00:16:14.183 ] 00:16:14.183 }' 00:16:14.183 23:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.183 23:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.443 23:56:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.443 "name": "raid_bdev1", 00:16:14.443 "uuid": "852024d5-1ea0-473b-9ec0-6b96df29ccaf", 00:16:14.443 "strip_size_kb": 0, 00:16:14.443 "state": "online", 00:16:14.443 "raid_level": "raid1", 00:16:14.443 "superblock": true, 00:16:14.443 "num_base_bdevs": 2, 00:16:14.443 "num_base_bdevs_discovered": 1, 00:16:14.443 "num_base_bdevs_operational": 1, 00:16:14.443 "base_bdevs_list": [ 00:16:14.443 { 00:16:14.443 "name": null, 00:16:14.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.443 "is_configured": false, 00:16:14.443 "data_offset": 0, 00:16:14.443 "data_size": 7936 00:16:14.443 }, 00:16:14.443 { 00:16:14.443 "name": "BaseBdev2", 00:16:14.443 "uuid": "3ac0d5e6-b99f-53fa-b051-083d408eb497", 00:16:14.443 "is_configured": true, 00:16:14.443 "data_offset": 256, 00:16:14.443 "data_size": 7936 00:16:14.443 } 00:16:14.443 ] 00:16:14.443 }' 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99130 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 99130 ']' 00:16:14.443 23:56:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 99130 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 99130 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:14.443 killing process with pid 99130 00:16:14.443 Received shutdown signal, test time was about 60.000000 seconds 00:16:14.443 00:16:14.443 Latency(us) 00:16:14.443 [2024-11-02T23:56:08.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.443 [2024-11-02T23:56:08.538Z] =================================================================================================================== 00:16:14.443 [2024-11-02T23:56:08.538Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 99130' 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 99130 00:16:14.443 [2024-11-02 23:56:08.519410] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.443 [2024-11-02 23:56:08.519523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.443 [2024-11-02 23:56:08.519571] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.443 [2024-11-02 23:56:08.519579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:16:14.443 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 99130 00:16:14.703 [2024-11-02 23:56:08.552471] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.703 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:16:14.703 00:16:14.703 real 0m16.308s 00:16:14.703 user 0m21.847s 00:16:14.703 sys 0m1.736s 00:16:14.703 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:14.703 23:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.703 ************************************ 00:16:14.703 END TEST raid_rebuild_test_sb_md_interleaved 00:16:14.703 ************************************ 00:16:14.963 23:56:08 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:16:14.963 23:56:08 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:16:14.963 23:56:08 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99130 ']' 00:16:14.963 23:56:08 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99130 00:16:14.963 23:56:08 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:16:14.963 ************************************ 00:16:14.963 END TEST bdev_raid 00:16:14.963 ************************************ 00:16:14.963 00:16:14.963 real 9m57.442s 00:16:14.963 user 14m10.799s 00:16:14.963 sys 1m51.093s 00:16:14.963 23:56:08 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:14.963 23:56:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.963 23:56:08 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:14.963 23:56:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:14.963 23:56:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:14.963 23:56:08 -- common/autotest_common.sh@10 -- # set +x 00:16:14.963 
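The `killprocess 99130` sequence traced above guards the kill with `kill -0 <pid>`: signal 0 probes whether the process exists without actually delivering a signal, and only then is the real signal sent. A minimal standalone sketch of that pattern (not the actual `autotest_common.sh` implementation, which additionally verifies the process name via `ps` before killing):

```shell
# Guarded kill, modeled on the killprocess helper traced in the log.
# kill -0 checks for the pid's existence without delivering a signal.
killprocess() {
  local pid=$1
  if kill -0 "$pid" 2>/dev/null; then
    echo "killing process with pid $pid"
    kill "$pid" 2>/dev/null
  else
    echo "Process with pid $pid is not found"
  fi
}

sleep 60 &
bg=$!
killprocess "$bg"              # alive: gets terminated
wait "$bg" 2>/dev/null || true # reap it so the pid is truly gone
killprocess "$bg"              # gone now: reports not found
```

The two echoed messages mirror the "killing process with pid ..." and "Process with pid ... is not found" lines visible in the trace above.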
************************************ 00:16:14.963 START TEST spdkcli_raid 00:16:14.963 ************************************ 00:16:14.963 23:56:08 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:14.963 * Looking for test storage... 00:16:14.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:14.963 23:56:09 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:15.223 23:56:09 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:16:15.223 23:56:09 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:15.223 23:56:09 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:15.223 23:56:09 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:15.223 23:56:09 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:15.223 23:56:09 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:15.223 23:56:09 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.223 23:56:09 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:16:15.223 23:56:09 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:16:15.223 23:56:09 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:15.224 23:56:09 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:16:15.224 23:56:09 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.224 23:56:09 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:15.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.224 --rc genhtml_branch_coverage=1 00:16:15.224 --rc genhtml_function_coverage=1 00:16:15.224 --rc genhtml_legend=1 00:16:15.224 --rc geninfo_all_blocks=1 00:16:15.224 --rc geninfo_unexecuted_blocks=1 00:16:15.224 00:16:15.224 ' 00:16:15.224 23:56:09 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:15.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.224 --rc genhtml_branch_coverage=1 00:16:15.224 --rc genhtml_function_coverage=1 00:16:15.224 --rc genhtml_legend=1 00:16:15.224 --rc geninfo_all_blocks=1 00:16:15.224 --rc geninfo_unexecuted_blocks=1 00:16:15.224 00:16:15.224 ' 00:16:15.224 
23:56:09 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:15.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.224 --rc genhtml_branch_coverage=1 00:16:15.224 --rc genhtml_function_coverage=1 00:16:15.224 --rc genhtml_legend=1 00:16:15.224 --rc geninfo_all_blocks=1 00:16:15.224 --rc geninfo_unexecuted_blocks=1 00:16:15.224 00:16:15.224 ' 00:16:15.224 23:56:09 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:15.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.224 --rc genhtml_branch_coverage=1 00:16:15.224 --rc genhtml_function_coverage=1 00:16:15.224 --rc genhtml_legend=1 00:16:15.224 --rc geninfo_all_blocks=1 00:16:15.224 --rc geninfo_unexecuted_blocks=1 00:16:15.224 00:16:15.224 ' 00:16:15.224 23:56:09 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:15.224 23:56:09 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:15.224 23:56:09 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:15.224 23:56:09 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:16:15.224 23:56:09 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:16:15.224 23:56:09 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:16:15.224 23:56:09 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:16:15.224 23:56:09 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:16:15.224 23:56:09 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:16:15.224 23:56:09 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:16:15.224 23:56:09 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
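The `cmp_versions` trace above (`lt 1.15 2`) splits each version string on dots with `IFS=.-:` and compares the fields numerically one by one, which is why `1.9` correctly sorts before `1.15`. A simplified sketch of that idea, using a hypothetical `version_lt` helper rather than the real `scripts/common.sh` code:

```shell
# Field-wise numeric version comparison: returns 0 (true) iff $1 < $2.
# Missing fields are treated as 0, so "2" compares like "2.0".
version_lt() {
  local IFS=.
  local -a v1=($1) v2=($2)
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}
    (( a < b )) && return 0   # strictly smaller field decides
    (( a > b )) && return 1   # strictly larger field decides
  done
  return 1                    # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 sorts before 2"
version_lt 2.1 2.1 || echo "2.1 is not less than 2.1"
```

A plain string comparison would get `1.9` vs `1.15` wrong (`"9" > "15"` lexically), which is exactly what the field-wise split avoids.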
00:16:15.224 23:56:09 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:16:15.224 23:56:09 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:16:15.224 23:56:09 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:16:15.224 23:56:09 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:16:15.224 23:56:09 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:16:15.224 23:56:09 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:16:15.224 23:56:09 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:16:15.224 23:56:09 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:16:15.224 23:56:09 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:16:15.224 23:56:09 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:16:15.224 23:56:09 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:16:15.224 23:56:09 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:16:15.224 23:56:09 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:16:15.224 23:56:09 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:16:15.224 23:56:09 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:15.224 23:56:09 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:15.224 23:56:09 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:15.224 23:56:09 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:15.224 23:56:09 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:15.224 23:56:09 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:15.224 23:56:09 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:16:15.224 23:56:09 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:16:15.224 23:56:09 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:15.224 23:56:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.224 23:56:09 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:16:15.224 23:56:09 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=99799 00:16:15.224 23:56:09 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:15.224 23:56:09 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 99799 00:16:15.224 23:56:09 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 99799 ']' 00:16:15.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.224 23:56:09 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.224 23:56:09 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:15.224 23:56:09 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.224 23:56:09 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:15.224 23:56:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.224 [2024-11-02 23:56:09.279489] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
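The `waitforlisten 99799` step above blocks until the freshly started `spdk_tgt` is reachable on its UNIX domain socket (`/var/tmp/spdk.sock`). At its core that is a bounded poll loop; the sketch below is a hypothetical simplification that checks for a path's existence in place of the real RPC-socket readiness probe:

```shell
# Bounded polling loop, a sketch of the waitforlisten idea: retry until a
# path appears or the retry budget is exhausted. Returns 1 on timeout.
wait_for_path() {
  local path=$1 max_retries=${2:-100} i
  for (( i = 0; i < max_retries; i++ )); do
    [[ -e $path ]] && return 0
    sleep 0.1
  done
  return 1
}

tmp=$(mktemp)
wait_for_path "$tmp" 5 && echo "found $tmp"
rm -f "$tmp"
wait_for_path "$tmp" 3 || echo "timed out waiting for $tmp"
```

The real helper additionally issues an RPC to confirm the target answers, not just that the socket file exists; the retry cap corresponds to the `max_retries=100` local seen in the trace.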
00:16:15.224 [2024-11-02 23:56:09.279603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99799 ] 00:16:15.484 [2024-11-02 23:56:09.436126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:15.484 [2024-11-02 23:56:09.464240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.485 [2024-11-02 23:56:09.464331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.054 23:56:10 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:16.054 23:56:10 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:16:16.054 23:56:10 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:16:16.054 23:56:10 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:16.054 23:56:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:16.054 23:56:10 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:16:16.054 23:56:10 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:16.055 23:56:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:16.055 23:56:10 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:16:16.055 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:16:16.055 ' 00:16:17.965 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:16:17.965 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:16:17.965 23:56:11 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:16:17.965 23:56:11 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:17.965 23:56:11 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:17.965 23:56:11 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:16:17.965 23:56:11 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:17.965 23:56:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:17.965 23:56:11 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:16:17.965 ' 00:16:18.904 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:16:18.904 23:56:12 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:16:18.904 23:56:12 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:18.904 23:56:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:19.163 23:56:13 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:16:19.163 23:56:13 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:19.163 23:56:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:19.163 23:56:13 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:16:19.163 23:56:13 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:16:19.423 23:56:13 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:16:19.681 23:56:13 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:16:19.681 23:56:13 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:16:19.682 23:56:13 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:19.682 23:56:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:19.682 23:56:13 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:16:19.682 23:56:13 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:19.682 23:56:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:19.682 23:56:13 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:16:19.682 ' 00:16:20.620 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:16:20.620 23:56:14 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:16:20.620 23:56:14 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:20.620 23:56:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.880 23:56:14 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:16:20.880 23:56:14 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:20.880 23:56:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.880 23:56:14 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:16:20.880 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:16:20.880 ' 00:16:22.259 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:16:22.259 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:16:22.259 23:56:16 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:16:22.259 23:56:16 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:22.259 23:56:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:22.259 23:56:16 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 99799 00:16:22.259 23:56:16 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 99799 ']' 00:16:22.259 23:56:16 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 99799 00:16:22.259 23:56:16 spdkcli_raid -- 
common/autotest_common.sh@957 -- # uname 00:16:22.259 23:56:16 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:22.259 23:56:16 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 99799 00:16:22.259 23:56:16 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:22.259 23:56:16 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:22.259 23:56:16 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 99799' 00:16:22.259 killing process with pid 99799 00:16:22.259 23:56:16 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 99799 00:16:22.259 23:56:16 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 99799 00:16:22.829 23:56:16 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:16:22.829 23:56:16 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 99799 ']' 00:16:22.829 23:56:16 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 99799 00:16:22.829 23:56:16 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 99799 ']' 00:16:22.829 23:56:16 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 99799 00:16:22.829 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (99799) - No such process 00:16:22.829 23:56:16 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 99799 is not found' 00:16:22.829 Process with pid 99799 is not found 00:16:22.829 23:56:16 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:16:22.829 23:56:16 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:16:22.829 23:56:16 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:16:22.829 23:56:16 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:16:22.829 00:16:22.829 real 0m7.749s 00:16:22.829 user 0m16.399s 00:16:22.829 sys 
0m1.126s 00:16:22.829 23:56:16 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:22.829 ************************************ 00:16:22.829 END TEST spdkcli_raid 00:16:22.829 ************************************ 00:16:22.829 23:56:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:22.829 23:56:16 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:22.829 23:56:16 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:22.829 23:56:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:22.829 23:56:16 -- common/autotest_common.sh@10 -- # set +x 00:16:22.829 ************************************ 00:16:22.829 START TEST blockdev_raid5f 00:16:22.829 ************************************ 00:16:22.829 23:56:16 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:22.829 * Looking for test storage... 00:16:22.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:22.829 23:56:16 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:22.829 23:56:16 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:16:22.829 23:56:16 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:23.089 23:56:16 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:23.089 23:56:16 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.089 23:56:16 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.089 23:56:16 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.089 23:56:16 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.089 23:56:16 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.089 23:56:16 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.090 23:56:16 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:16:23.090 23:56:16 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.090 23:56:16 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:23.090 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.090 --rc genhtml_branch_coverage=1 00:16:23.090 --rc genhtml_function_coverage=1 00:16:23.090 --rc genhtml_legend=1 00:16:23.090 --rc geninfo_all_blocks=1 00:16:23.090 --rc geninfo_unexecuted_blocks=1 00:16:23.090 00:16:23.090 ' 00:16:23.090 23:56:16 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:23.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.090 --rc genhtml_branch_coverage=1 00:16:23.090 --rc genhtml_function_coverage=1 00:16:23.090 --rc genhtml_legend=1 00:16:23.090 --rc geninfo_all_blocks=1 00:16:23.090 --rc geninfo_unexecuted_blocks=1 00:16:23.090 00:16:23.090 ' 00:16:23.090 23:56:16 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:23.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.090 --rc genhtml_branch_coverage=1 00:16:23.090 --rc genhtml_function_coverage=1 00:16:23.090 --rc genhtml_legend=1 00:16:23.090 --rc geninfo_all_blocks=1 00:16:23.090 --rc geninfo_unexecuted_blocks=1 00:16:23.090 00:16:23.090 ' 00:16:23.090 23:56:16 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:23.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.090 --rc genhtml_branch_coverage=1 00:16:23.090 --rc genhtml_function_coverage=1 00:16:23.090 --rc genhtml_legend=1 00:16:23.090 --rc geninfo_all_blocks=1 00:16:23.090 --rc geninfo_unexecuted_blocks=1 00:16:23.090 00:16:23.090 ' 00:16:23.090 23:56:16 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:23.090 23:56:16 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:16:23.090 23:56:16 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:23.090 23:56:16 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:23.090 23:56:16 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:23.090 23:56:16 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:23.090 23:56:16 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:23.090 23:56:16 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:23.090 23:56:16 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:16:23.090 23:56:16 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:16:23.090 23:56:16 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:16:23.090 23:56:16 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:16:23.090 23:56:16 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:16:23.090 23:56:16 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:16:23.090 23:56:17 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:16:23.090 23:56:17 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:16:23.090 23:56:17 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:16:23.090 23:56:17 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:16:23.090 23:56:17 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:16:23.090 23:56:17 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:16:23.090 23:56:17 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:16:23.090 23:56:17 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:16:23.090 23:56:17 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:16:23.090 23:56:17 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:16:23.090 23:56:17 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=100060 00:16:23.090 23:56:17 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:23.090 23:56:17 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:23.090 23:56:17 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 100060 00:16:23.090 23:56:17 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 100060 ']' 00:16:23.090 23:56:17 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.090 23:56:17 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:23.090 23:56:17 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.090 23:56:17 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:23.090 23:56:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:23.090 [2024-11-02 23:56:17.108167] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:16:23.090 [2024-11-02 23:56:17.108404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100060 ] 00:16:23.350 [2024-11-02 23:56:17.264261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.350 [2024-11-02 23:56:17.288981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.919 23:56:17 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:23.919 23:56:17 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:16:23.919 23:56:17 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:16:23.919 23:56:17 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:16:23.919 23:56:17 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:16:23.919 23:56:17 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.919 23:56:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:23.919 Malloc0 00:16:23.919 Malloc1 00:16:23.919 Malloc2 00:16:23.919 23:56:17 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.919 23:56:17 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:16:23.919 23:56:17 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.919 23:56:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:23.919 23:56:17 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.920 23:56:17 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:16:23.920 23:56:18 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:16:23.920 23:56:18 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.920 23:56:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:24.179 23:56:18 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.179 23:56:18 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:16:24.179 23:56:18 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.179 23:56:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:24.179 23:56:18 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.180 23:56:18 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:24.180 23:56:18 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.180 23:56:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:24.180 23:56:18 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.180 23:56:18 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:16:24.180 23:56:18 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:16:24.180 23:56:18 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:16:24.180 23:56:18 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.180 23:56:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:24.180 23:56:18 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.180 23:56:18 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:16:24.180 23:56:18 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:16:24.180 23:56:18 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3ec9c2e8-39ab-4e03-a822-f56e0766ac95"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3ec9c2e8-39ab-4e03-a822-f56e0766ac95",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3ec9c2e8-39ab-4e03-a822-f56e0766ac95",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "f0eb3de8-0998-496f-aa25-5cb51b85176a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"3c8a8384-42ee-481f-b7a0-08fb87aa8bdc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b55c4679-dd3b-4bc0-a70f-ad0a5d816138",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:16:24.180 23:56:18 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:16:24.180 23:56:18 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:16:24.180 23:56:18 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:16:24.180 23:56:18 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 100060 00:16:24.180 23:56:18 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 100060 ']' 00:16:24.180 23:56:18 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 100060 00:16:24.180 23:56:18 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:16:24.180 23:56:18 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:24.180 23:56:18 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 100060 00:16:24.180 killing process with pid 100060 00:16:24.180 23:56:18 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:24.180 23:56:18 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:24.180 23:56:18 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 100060' 00:16:24.180 23:56:18 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 100060 00:16:24.180 23:56:18 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 100060 00:16:24.751 23:56:18 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:24.751 23:56:18 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:24.751 
23:56:18 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:24.751 23:56:18 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:24.751 23:56:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:24.751 ************************************ 00:16:24.751 START TEST bdev_hello_world 00:16:24.751 ************************************ 00:16:24.751 23:56:18 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:24.751 [2024-11-02 23:56:18.709195] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:16:24.751 [2024-11-02 23:56:18.709313] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100094 ] 00:16:25.011 [2024-11-02 23:56:18.866307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.011 [2024-11-02 23:56:18.891525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.011 [2024-11-02 23:56:19.066492] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:25.011 [2024-11-02 23:56:19.066618] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:16:25.011 [2024-11-02 23:56:19.066650] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:25.011 [2024-11-02 23:56:19.067024] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:25.011 [2024-11-02 23:56:19.067204] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:25.011 [2024-11-02 23:56:19.067262] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:25.011 [2024-11-02 23:56:19.067340] hello_bdev.c: 65:read_complete: *NOTICE*: Read string 
from bdev : Hello World! 00:16:25.011 00:16:25.011 [2024-11-02 23:56:19.067396] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:25.271 00:16:25.271 ************************************ 00:16:25.271 END TEST bdev_hello_world 00:16:25.271 ************************************ 00:16:25.271 real 0m0.663s 00:16:25.271 user 0m0.367s 00:16:25.271 sys 0m0.190s 00:16:25.271 23:56:19 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:25.271 23:56:19 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:25.271 23:56:19 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:16:25.271 23:56:19 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:25.271 23:56:19 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:25.271 23:56:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:25.530 ************************************ 00:16:25.530 START TEST bdev_bounds 00:16:25.530 ************************************ 00:16:25.530 23:56:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:16:25.531 23:56:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100125 00:16:25.531 23:56:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:25.531 23:56:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:25.531 23:56:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100125' 00:16:25.531 Process bdevio pid: 100125 00:16:25.531 23:56:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100125 00:16:25.531 23:56:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 100125 ']' 00:16:25.531 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.531 23:56:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.531 23:56:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:25.531 23:56:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.531 23:56:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:25.531 23:56:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:25.531 [2024-11-02 23:56:19.453529] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:16:25.531 [2024-11-02 23:56:19.453736] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100125 ] 00:16:25.531 [2024-11-02 23:56:19.611885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:25.789 [2024-11-02 23:56:19.641832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.789 [2024-11-02 23:56:19.641883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.789 [2024-11-02 23:56:19.642003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.357 23:56:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:26.357 23:56:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:16:26.357 23:56:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:26.357 I/O targets: 00:16:26.357 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:16:26.357 
00:16:26.357 00:16:26.357 CUnit - A unit testing framework for C - Version 2.1-3 00:16:26.357 http://cunit.sourceforge.net/ 00:16:26.357 00:16:26.357 00:16:26.357 Suite: bdevio tests on: raid5f 00:16:26.357 Test: blockdev write read block ...passed 00:16:26.357 Test: blockdev write zeroes read block ...passed 00:16:26.357 Test: blockdev write zeroes read no split ...passed 00:16:26.357 Test: blockdev write zeroes read split ...passed 00:16:26.616 Test: blockdev write zeroes read split partial ...passed 00:16:26.616 Test: blockdev reset ...passed 00:16:26.616 Test: blockdev write read 8 blocks ...passed 00:16:26.616 Test: blockdev write read size > 128k ...passed 00:16:26.616 Test: blockdev write read invalid size ...passed 00:16:26.616 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:26.616 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:26.616 Test: blockdev write read max offset ...passed 00:16:26.616 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:26.616 Test: blockdev writev readv 8 blocks ...passed 00:16:26.616 Test: blockdev writev readv 30 x 1block ...passed 00:16:26.616 Test: blockdev writev readv block ...passed 00:16:26.616 Test: blockdev writev readv size > 128k ...passed 00:16:26.616 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:26.616 Test: blockdev comparev and writev ...passed 00:16:26.616 Test: blockdev nvme passthru rw ...passed 00:16:26.616 Test: blockdev nvme passthru vendor specific ...passed 00:16:26.616 Test: blockdev nvme admin passthru ...passed 00:16:26.616 Test: blockdev copy ...passed 00:16:26.616 00:16:26.616 Run Summary: Type Total Ran Passed Failed Inactive 00:16:26.616 suites 1 1 n/a 0 0 00:16:26.616 tests 23 23 23 0 0 00:16:26.616 asserts 130 130 130 0 n/a 00:16:26.616 00:16:26.616 Elapsed time = 0.310 seconds 00:16:26.616 0 00:16:26.616 23:56:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 100125 
00:16:26.616 23:56:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 100125 ']' 00:16:26.616 23:56:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 100125 00:16:26.616 23:56:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:16:26.616 23:56:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:26.616 23:56:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 100125 00:16:26.616 23:56:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:26.616 23:56:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:26.616 23:56:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 100125' 00:16:26.616 killing process with pid 100125 00:16:26.616 23:56:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 100125 00:16:26.616 23:56:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 100125 00:16:26.875 23:56:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:26.875 00:16:26.875 real 0m1.407s 00:16:26.875 user 0m3.399s 00:16:26.875 sys 0m0.331s 00:16:26.875 23:56:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:26.875 23:56:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:26.875 ************************************ 00:16:26.875 END TEST bdev_bounds 00:16:26.875 ************************************ 00:16:26.875 23:56:20 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:26.875 23:56:20 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:26.875 23:56:20 blockdev_raid5f -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:16:26.875 23:56:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:26.875 ************************************ 00:16:26.875 START TEST bdev_nbd 00:16:26.875 ************************************ 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:16:26.875 23:56:20 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100174 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100174 /var/tmp/spdk-nbd.sock 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 100174 ']' 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:26.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:26.875 23:56:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:26.875 [2024-11-02 23:56:20.952058] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 
00:16:26.875 [2024-11-02 23:56:20.952276] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.134 [2024-11-02 23:56:21.087513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.134 [2024-11-02 23:56:21.112066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.702 23:56:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:27.702 23:56:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:16:27.702 23:56:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:16:27.702 23:56:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:27.702 23:56:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:16:27.702 23:56:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:27.702 23:56:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:16:27.702 23:56:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:27.702 23:56:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:16:27.702 23:56:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:27.702 23:56:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:27.702 23:56:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:27.702 23:56:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:27.702 23:56:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:27.702 23:56:21 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:16:27.961 23:56:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:27.961 23:56:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:27.961 23:56:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:27.961 23:56:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:27.961 23:56:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:16:27.961 23:56:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:27.961 23:56:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:27.961 23:56:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:27.961 23:56:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:16:27.961 23:56:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:27.961 23:56:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:27.961 23:56:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:27.961 1+0 records in 00:16:27.961 1+0 records out 00:16:27.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420372 s, 9.7 MB/s 00:16:27.962 23:56:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.962 23:56:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:16:27.962 23:56:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.962 23:56:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:16:27.962 23:56:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0
00:16:27.962 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:16:27.962 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:16:27.962 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:16:28.221 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:16:28.221 {
00:16:28.221 "nbd_device": "/dev/nbd0",
00:16:28.222 "bdev_name": "raid5f"
00:16:28.222 }
00:16:28.222 ]'
00:16:28.222 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:16:28.222 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:16:28.222 {
00:16:28.222 "nbd_device": "/dev/nbd0",
00:16:28.222 "bdev_name": "raid5f"
00:16:28.222 }
00:16:28.222 ]'
00:16:28.222 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:16:28.222 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:16:28.222 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:16:28.222 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:16:28.222 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:16:28.222 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:16:28.222 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:28.222 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:16:28.480 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:16:28.480 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:16:28.480 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:16:28.480 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:28.480 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:28.480 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:16:28.480 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:16:28.480 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:16:28.480 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:16:28.480 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:16:28.480 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:16:28.738 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:16:28.738 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:16:28.738 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:16:28.738 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:16:28.738 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:16:28.738 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:16:28.738 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:16:28.738 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:16:28.738 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:16:28.738 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:16:28.738 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:16:28.738 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:16:28.739 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0
00:16:28.739 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:16:28.739 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f')
00:16:28.739 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:16:28.739 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0')
00:16:28.739 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:16:28.739 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0
00:16:28.739 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:16:28.739 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f')
00:16:28.739 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:16:28.739 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:16:28.739 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:16:28.739 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:16:28.739 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:16:28.739 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:28.739 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0
00:16:28.997 /dev/nbd0
00:16:28.997 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:16:28.997 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:16:28.997 23:56:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:16:28.997 23:56:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i
00:16:28.997 23:56:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:16:28.997 23:56:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:16:28.997 23:56:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:16:28.997 23:56:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break
00:16:28.997 23:56:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:16:28.997 23:56:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:16:28.997 23:56:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:16:28.997 1+0 records in
00:16:28.997 1+0 records out
00:16:28.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394809 s, 10.4 MB/s
00:16:28.997 23:56:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:28.997 23:56:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096
00:16:28.998 23:56:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:28.998 23:56:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:16:28.998 23:56:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0
00:16:28.998 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:16:28.998 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:28.998 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:16:28.998 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:16:28.998 23:56:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:16:29.257 {
00:16:29.257 "nbd_device": "/dev/nbd0",
00:16:29.257 "bdev_name": "raid5f"
00:16:29.257 }
00:16:29.257 ]'
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:16:29.257 {
00:16:29.257 "nbd_device": "/dev/nbd0",
00:16:29.257 "bdev_name": "raid5f"
00:16:29.257 }
00:16:29.257 ]'
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']'
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:16:29.257 256+0 records in
00:16:29.257 256+0 records out
00:16:29.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135424 s, 77.4 MB/s
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:16:29.257 256+0 records in
00:16:29.257 256+0 records out
00:16:29.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0320677 s, 32.7 MB/s
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:29.257 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:16:29.516 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:16:29.516 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:16:29.516 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:16:29.516 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:29.516 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:29.516 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:16:29.516 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:16:29.516 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:16:29.516 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:16:29.516 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:16:29.517 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:16:29.775 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:16:29.775 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:16:29.775 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:16:29.775 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:16:29.775 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:16:29.775 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:16:29.775 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:16:29.775 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:16:29.775 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:16:29.775 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:16:29.775 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:16:29.775 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:16:29.775 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:16:29.775 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:16:29.775 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:16:29.775 23:56:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:16:30.034 malloc_lvol_verify
00:16:30.034 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:16:30.293 fafe074a-1f0c-40e5-921c-f50dd84fee35
00:16:30.293 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:16:30.553 4e4d846c-16a0-45c9-b84c-b699a8e75329
00:16:30.553 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:16:30.553 /dev/nbd0
00:16:30.812 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:16:30.812 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:16:30.812 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:16:30.812 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:16:30.812 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:16:30.812 mke2fs 1.47.0 (5-Feb-2023)
00:16:30.812 Discarding device blocks: 0/4096 done
00:16:30.812 Creating filesystem with 4096 1k blocks and 1024 inodes
00:16:30.812
00:16:30.812 Allocating group tables: 0/1 done
00:16:30.812 Writing inode tables: 0/1 done
00:16:30.812 Creating journal (1024 blocks): done
00:16:30.812 Writing superblocks and filesystem accounting information: 0/1 done
00:16:30.812
00:16:30.812 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:16:30.812 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:16:30.812 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:16:30.812 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:16:30.812 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:16:30.812 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:30.812 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:16:31.206 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:16:31.206 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:16:31.206 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:16:31.206 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:31.206 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:31.206 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:16:31.206 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:16:31.206 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:16:31.206 23:56:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100174
00:16:31.206 23:56:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 100174 ']'
00:16:31.206 23:56:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 100174
00:16:31.206 23:56:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname
00:16:31.206 23:56:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:16:31.206 23:56:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 100174
00:16:31.207 23:56:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:16:31.207 23:56:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:16:31.207 23:56:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 100174'
00:16:31.207 killing process with pid 100174
00:16:31.207 23:56:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 100174
00:16:31.207 23:56:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@976 -- # wait 100174
00:16:31.207 23:56:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:16:31.207
00:16:31.207 real 0m4.402s
00:16:31.207 user 0m6.387s
00:16:31.207 sys 0m1.314s
00:16:31.207 23:56:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:31.207 ************************************
00:16:31.207 END TEST bdev_nbd
00:16:31.207 ************************************
00:16:31.207 23:56:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:16:31.472 23:56:25 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:16:31.473 23:56:25 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']'
00:16:31.473 23:56:25 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']'
00:16:31.473 23:56:25 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite ''
00:16:31.473 23:56:25 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:16:31.473 23:56:25 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:31.473 23:56:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:16:31.473 ************************************
00:16:31.473 START TEST bdev_fio
00:16:31.473 ************************************
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite ''
00:16:31.473 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo ''
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=//
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context=
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context=
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']'
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']'
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']'
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']'
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]'
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']'
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:16:31.473 ************************************
00:16:31.473 START TEST bdev_fio_rw_verify
00:16:31.473 ************************************
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib=
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:16:31.473 23:56:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:16:31.733 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:16:31.733 fio-3.35
00:16:31.733 Starting 1 thread
00:16:43.945
00:16:43.945 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100361: Sat Nov 2 23:56:36 2024
00:16:43.945 read: IOPS=12.7k, BW=49.8MiB/s (52.2MB/s)(498MiB/10001msec)
00:16:43.945 slat (nsec): min=16906, max=58769, avg=18333.60, stdev=1800.53
00:16:43.945 clat (usec): min=10, max=315, avg=127.23, stdev=43.41
00:16:43.945 lat (usec): min=29, max=339, avg=145.57, stdev=43.61
00:16:43.945 clat percentiles (usec):
00:16:43.945 | 50.000th=[ 130], 99.000th=[ 208], 99.900th=[ 237], 99.990th=[ 273],
00:16:43.945 | 99.999th=[ 306]
00:16:43.945 write: IOPS=13.3k, BW=52.0MiB/s (54.5MB/s)(514MiB/9876msec); 0 zone resets
00:16:43.945 slat (usec): min=7, max=244, avg=15.88, stdev= 3.58
00:16:43.945 clat (usec): min=56, max=1754, avg=288.70, stdev=41.86
00:16:43.945 lat (usec): min=71, max=1998, avg=304.58, stdev=43.04
00:16:43.945 clat percentiles (usec):
00:16:43.945 | 50.000th=[ 293], 99.000th=[ 367], 99.900th=[ 594], 99.990th=[ 1319],
00:16:43.945 | 99.999th=[ 1696]
00:16:43.945 bw ( KiB/s): min=50304, max=55080, per=98.99%, avg=52712.58, stdev=1682.61, samples=19
00:16:43.945 iops : min=12576, max=13770, avg=13178.05, stdev=420.63, samples=19
00:16:43.945 lat (usec) : 20=0.01%, 50=0.01%, 100=16.84%, 250=40.38%, 500=42.70%
00:16:43.945 lat (usec) : 750=0.04%, 1000=0.02%
00:16:43.945 lat (msec) : 2=0.01%
00:16:43.945 cpu : usr=99.00%, sys=0.32%, ctx=22, majf=0, minf=13452
00:16:43.945 IO depths : 1=7.6%, 2=19.9%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:43.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:43.945 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:43.945 issued rwts: total=127499,131477,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:43.945 latency : target=0, window=0, percentile=100.00%, depth=8
00:16:43.945
00:16:43.945 Run status group 0 (all jobs):
00:16:43.945 READ: bw=49.8MiB/s (52.2MB/s), 49.8MiB/s-49.8MiB/s (52.2MB/s-52.2MB/s), io=498MiB (522MB), run=10001-10001msec
00:16:43.945 WRITE: bw=52.0MiB/s (54.5MB/s), 52.0MiB/s-52.0MiB/s (54.5MB/s-54.5MB/s), io=514MiB (539MB), run=9876-9876msec
00:16:43.945 -----------------------------------------------------
00:16:43.945 Suppressions used:
00:16:43.945 count bytes template
00:16:43.945 1 7 /usr/src/fio/parse.c
00:16:43.945 35 3360 /usr/src/fio/iolog.c
00:16:43.945 1 8 libtcmalloc_minimal.so
00:16:43.945 1 904 libcrypto.so
00:16:43.945 -----------------------------------------------------
00:16:43.945
00:16:43.945
00:16:43.945 real 0m11.326s
00:16:43.945 user 0m11.561s
00:16:43.945 sys 0m0.648s
00:16:43.945 23:56:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:43.945 ************************************
00:16:43.945 END TEST bdev_fio_rw_verify
00:16:43.945 ************************************
00:16:43.945 23:56:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:16:43.945 23:56:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:16:43.945 23:56:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:43.945 23:56:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:16:43.945 23:56:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:43.945 23:56:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim
00:16:43.945 23:56:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=
00:16:43.945 23:56:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context=
00:16:43.945 23:56:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio
00:16:43.945 23:56:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:16:43.945 23:56:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']'
00:16:43.945 23:56:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']'
00:16:43.945 23:56:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:43.945 23:56:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat
00:16:43.945 23:56:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']'
00:16:43.945 23:56:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']'
00:16:43.945 23:56:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite
00:16:43.946 23:56:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:16:43.946 23:56:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3ec9c2e8-39ab-4e03-a822-f56e0766ac95"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3ec9c2e8-39ab-4e03-a822-f56e0766ac95",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3ec9c2e8-39ab-4e03-a822-f56e0766ac95",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "f0eb3de8-0998-496f-aa25-5cb51b85176a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "3c8a8384-42ee-481f-b7a0-08fb87aa8bdc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b55c4679-dd3b-4bc0-a70f-ad0a5d816138",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}'
00:16:43.946 23:56:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]]
00:16:43.946 23:56:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:43.946 23:56:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd
00:16:43.946 /home/vagrant/spdk_repo/spdk
00:16:43.946 23:56:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT
00:16:43.946 23:56:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0
00:16:43.946
00:16:43.946 real 0m11.636s
00:16:43.946 user 0m11.697s
00:16:43.946 sys 0m0.792s
00:16:43.946 23:56:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:43.946 23:56:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:16:43.946 ************************************
00:16:43.946 END TEST bdev_fio
00:16:43.946 ************************************
00:16:43.946 23:56:37 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:16:43.946 23:56:37 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:16:43.946 23:56:37 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']'
00:16:43.946 23:56:37 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:43.946 23:56:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:16:43.946 ************************************
00:16:43.946 START TEST bdev_verify
00:16:43.946 ************************************
00:16:43.946 23:56:37 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:16:43.946 [2024-11-02 23:56:37.133958] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization...
00:16:43.946 [2024-11-02 23:56:37.134078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100513 ] 00:16:43.946 [2024-11-02 23:56:37.289907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:43.946 [2024-11-02 23:56:37.340293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.946 [2024-11-02 23:56:37.340383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.946 Running I/O for 5 seconds... 00:16:45.820 11175.00 IOPS, 43.65 MiB/s [2024-11-02T23:56:40.850Z] 11185.00 IOPS, 43.69 MiB/s [2024-11-02T23:56:41.785Z] 11223.33 IOPS, 43.84 MiB/s [2024-11-02T23:56:42.722Z] 11268.75 IOPS, 44.02 MiB/s [2024-11-02T23:56:42.722Z] 11276.60 IOPS, 44.05 MiB/s 00:16:48.627 Latency(us) 00:16:48.627 [2024-11-02T23:56:42.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.627 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:48.627 Verification LBA range: start 0x0 length 0x2000 00:16:48.627 raid5f : 5.02 6795.58 26.55 0.00 0.00 28289.66 236.10 20376.26 00:16:48.627 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:48.627 Verification LBA range: start 0x2000 length 0x2000 00:16:48.627 raid5f : 5.03 4477.43 17.49 0.00 0.00 42839.02 232.52 30678.86 00:16:48.627 [2024-11-02T23:56:42.722Z] =================================================================================================================== 00:16:48.627 [2024-11-02T23:56:42.722Z] Total : 11273.01 44.04 0.00 0.00 34070.25 232.52 30678.86 00:16:49.207 00:16:49.207 real 0m5.934s 00:16:49.207 user 0m10.973s 00:16:49.207 sys 0m0.314s 00:16:49.207 23:56:42 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:49.207 23:56:42 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:49.207 ************************************ 00:16:49.207 END TEST bdev_verify 00:16:49.207 ************************************ 00:16:49.207 23:56:43 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:49.207 23:56:43 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:16:49.207 23:56:43 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:49.207 23:56:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:49.207 ************************************ 00:16:49.207 START TEST bdev_verify_big_io 00:16:49.207 ************************************ 00:16:49.207 23:56:43 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:49.207 [2024-11-02 23:56:43.158192] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:16:49.207 [2024-11-02 23:56:43.158343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100597 ] 00:16:49.470 [2024-11-02 23:56:43.321776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:49.470 [2024-11-02 23:56:43.374647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.470 [2024-11-02 23:56:43.374796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.729 Running I/O for 5 seconds... 
00:16:51.602 633.00 IOPS, 39.56 MiB/s [2024-11-02T23:56:47.075Z] 761.00 IOPS, 47.56 MiB/s [2024-11-02T23:56:48.013Z] 803.00 IOPS, 50.19 MiB/s [2024-11-02T23:56:48.950Z] 793.25 IOPS, 49.58 MiB/s [2024-11-02T23:56:48.950Z] 812.40 IOPS, 50.77 MiB/s 00:16:54.855 Latency(us) 00:16:54.855 [2024-11-02T23:56:48.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.855 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:54.855 Verification LBA range: start 0x0 length 0x200 00:16:54.855 raid5f : 5.19 465.34 29.08 0.00 0.00 6872771.59 184.23 304041.25 00:16:54.855 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:54.855 Verification LBA range: start 0x200 length 0x200 00:16:54.855 raid5f : 5.26 362.06 22.63 0.00 0.00 8745835.93 191.39 373641.06 00:16:54.855 [2024-11-02T23:56:48.950Z] =================================================================================================================== 00:16:54.855 [2024-11-02T23:56:48.950Z] Total : 827.39 51.71 0.00 0.00 7699123.50 184.23 373641.06 00:16:55.423 00:16:55.423 real 0m6.190s 00:16:55.423 user 0m11.443s 00:16:55.423 sys 0m0.330s 00:16:55.423 23:56:49 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:55.423 23:56:49 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:55.423 ************************************ 00:16:55.423 END TEST bdev_verify_big_io 00:16:55.423 ************************************ 00:16:55.423 23:56:49 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:55.423 23:56:49 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:16:55.423 23:56:49 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:55.423 23:56:49 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:55.423 ************************************ 00:16:55.423 START TEST bdev_write_zeroes 00:16:55.423 ************************************ 00:16:55.423 23:56:49 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:55.423 [2024-11-02 23:56:49.405289] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:16:55.423 [2024-11-02 23:56:49.405475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100683 ] 00:16:55.682 [2024-11-02 23:56:49.559772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.682 [2024-11-02 23:56:49.602255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.942 Running I/O for 1 seconds... 
00:16:56.880 29031.00 IOPS, 113.40 MiB/s 00:16:56.880 Latency(us) 00:16:56.880 [2024-11-02T23:56:50.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.880 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:56.880 raid5f : 1.01 29000.19 113.28 0.00 0.00 4400.82 1438.07 5924.00 00:16:56.880 [2024-11-02T23:56:50.975Z] =================================================================================================================== 00:16:56.880 [2024-11-02T23:56:50.975Z] Total : 29000.19 113.28 0.00 0.00 4400.82 1438.07 5924.00 00:16:57.140 00:16:57.140 real 0m1.906s 00:16:57.140 user 0m1.494s 00:16:57.140 sys 0m0.298s 00:16:57.140 23:56:51 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:57.140 23:56:51 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:57.140 ************************************ 00:16:57.140 END TEST bdev_write_zeroes 00:16:57.140 ************************************ 00:16:57.400 23:56:51 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:57.400 23:56:51 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:16:57.400 23:56:51 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:57.400 23:56:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:57.400 ************************************ 00:16:57.400 START TEST bdev_json_nonenclosed 00:16:57.400 ************************************ 00:16:57.400 23:56:51 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:57.400 [2024-11-02 
23:56:51.394401] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:16:57.400 [2024-11-02 23:56:51.394588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100721 ] 00:16:57.659 [2024-11-02 23:56:51.549021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.659 [2024-11-02 23:56:51.602774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.659 [2024-11-02 23:56:51.603018] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:57.659 [2024-11-02 23:56:51.603132] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:57.660 [2024-11-02 23:56:51.603206] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:57.660 00:16:57.660 real 0m0.405s 00:16:57.660 user 0m0.170s 00:16:57.660 sys 0m0.131s 00:16:57.660 23:56:51 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:57.660 23:56:51 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:57.660 ************************************ 00:16:57.660 END TEST bdev_json_nonenclosed 00:16:57.660 ************************************ 00:16:57.920 23:56:51 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:57.920 23:56:51 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:16:57.920 23:56:51 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:57.920 23:56:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:57.920 
************************************ 00:16:57.920 START TEST bdev_json_nonarray 00:16:57.920 ************************************ 00:16:57.920 23:56:51 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:57.920 [2024-11-02 23:56:51.871476] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 22.11.4 initialization... 00:16:57.920 [2024-11-02 23:56:51.871692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100752 ] 00:16:58.179 [2024-11-02 23:56:52.028657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.179 [2024-11-02 23:56:52.071321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.179 [2024-11-02 23:56:52.071564] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:16:58.179 [2024-11-02 23:56:52.071599] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:58.179 [2024-11-02 23:56:52.071618] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:58.179 ************************************ 00:16:58.179 END TEST bdev_json_nonarray 00:16:58.179 ************************************ 00:16:58.179 00:16:58.179 real 0m0.392s 00:16:58.179 user 0m0.166s 00:16:58.179 sys 0m0.121s 00:16:58.179 23:56:52 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:58.179 23:56:52 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:58.179 23:56:52 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:16:58.179 23:56:52 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:16:58.179 23:56:52 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:16:58.179 23:56:52 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:58.179 23:56:52 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:16:58.179 23:56:52 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:58.179 23:56:52 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:58.179 23:56:52 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:16:58.179 23:56:52 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:16:58.179 23:56:52 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:16:58.180 23:56:52 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:16:58.180 00:16:58.180 real 0m35.497s 00:16:58.180 user 0m48.102s 00:16:58.180 sys 0m4.905s 00:16:58.180 23:56:52 blockdev_raid5f -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:58.180 23:56:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:58.180 
************************************ 00:16:58.180 END TEST blockdev_raid5f 00:16:58.180 ************************************ 00:16:58.440 23:56:52 -- spdk/autotest.sh@194 -- # uname -s 00:16:58.440 23:56:52 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:16:58.440 23:56:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:58.440 23:56:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:58.440 23:56:52 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:16:58.440 23:56:52 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:16:58.440 23:56:52 -- spdk/autotest.sh@256 -- # timing_exit lib 00:16:58.440 23:56:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:58.440 23:56:52 -- common/autotest_common.sh@10 -- # set +x 00:16:58.440 23:56:52 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:16:58.440 23:56:52 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:16:58.440 23:56:52 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:16:58.440 23:56:52 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:16:58.440 23:56:52 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:58.440 23:56:52 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:58.440 23:56:52 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:16:58.440 23:56:52 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:16:58.440 23:56:52 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:16:58.440 23:56:52 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:16:58.440 23:56:52 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:16:58.440 23:56:52 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:16:58.440 23:56:52 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:16:58.440 23:56:52 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:16:58.440 23:56:52 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:16:58.440 23:56:52 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:16:58.440 23:56:52 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:16:58.440 23:56:52 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:16:58.440 23:56:52 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:16:58.440 23:56:52 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:16:58.440 23:56:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:58.440 23:56:52 -- common/autotest_common.sh@10 -- # set +x 00:16:58.440 23:56:52 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:16:58.440 23:56:52 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:16:58.440 23:56:52 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:16:58.440 23:56:52 -- common/autotest_common.sh@10 -- # set +x 00:17:00.981 INFO: APP EXITING 00:17:00.981 INFO: killing all VMs 00:17:00.981 INFO: killing vhost app 00:17:00.981 INFO: EXIT DONE 00:17:01.243 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:01.243 Waiting for block devices as requested 00:17:01.503 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:01.503 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:02.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:02.443 Cleaning 00:17:02.443 Removing: /var/run/dpdk/spdk0/config 00:17:02.443 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:17:02.443 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:17:02.443 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:17:02.443 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:17:02.443 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:17:02.443 Removing: /var/run/dpdk/spdk0/hugepage_info 00:17:02.443 Removing: /dev/shm/spdk_tgt_trace.pid68920 00:17:02.443 Removing: /var/run/dpdk/spdk0 00:17:02.443 Removing: /var/run/dpdk/spdk_pid100060 00:17:02.443 Removing: /var/run/dpdk/spdk_pid100094 00:17:02.443 Removing: /var/run/dpdk/spdk_pid100125 00:17:02.443 Removing: /var/run/dpdk/spdk_pid100350 00:17:02.443 Removing: /var/run/dpdk/spdk_pid100513 00:17:02.443 Removing: /var/run/dpdk/spdk_pid100597 00:17:02.443 Removing: 
/var/run/dpdk/spdk_pid100683 00:17:02.443 Removing: /var/run/dpdk/spdk_pid100721 00:17:02.702 Removing: /var/run/dpdk/spdk_pid100752 00:17:02.702 Removing: /var/run/dpdk/spdk_pid68756 00:17:02.702 Removing: /var/run/dpdk/spdk_pid68920 00:17:02.702 Removing: /var/run/dpdk/spdk_pid69121 00:17:02.702 Removing: /var/run/dpdk/spdk_pid69209 00:17:02.702 Removing: /var/run/dpdk/spdk_pid69237 00:17:02.702 Removing: /var/run/dpdk/spdk_pid69349 00:17:02.702 Removing: /var/run/dpdk/spdk_pid69367 00:17:02.702 Removing: /var/run/dpdk/spdk_pid69549 00:17:02.702 Removing: /var/run/dpdk/spdk_pid69623 00:17:02.702 Removing: /var/run/dpdk/spdk_pid69708 00:17:02.702 Removing: /var/run/dpdk/spdk_pid69808 00:17:02.702 Removing: /var/run/dpdk/spdk_pid69894 00:17:02.702 Removing: /var/run/dpdk/spdk_pid69928 00:17:02.702 Removing: /var/run/dpdk/spdk_pid69970 00:17:02.702 Removing: /var/run/dpdk/spdk_pid70035 00:17:02.702 Removing: /var/run/dpdk/spdk_pid70136 00:17:02.702 Removing: /var/run/dpdk/spdk_pid70561 00:17:02.702 Removing: /var/run/dpdk/spdk_pid70614 00:17:02.702 Removing: /var/run/dpdk/spdk_pid70668 00:17:02.702 Removing: /var/run/dpdk/spdk_pid70684 00:17:02.702 Removing: /var/run/dpdk/spdk_pid70754 00:17:02.702 Removing: /var/run/dpdk/spdk_pid70770 00:17:02.702 Removing: /var/run/dpdk/spdk_pid70839 00:17:02.702 Removing: /var/run/dpdk/spdk_pid70855 00:17:02.702 Removing: /var/run/dpdk/spdk_pid70897 00:17:02.702 Removing: /var/run/dpdk/spdk_pid70915 00:17:02.702 Removing: /var/run/dpdk/spdk_pid70957 00:17:02.702 Removing: /var/run/dpdk/spdk_pid70975 00:17:02.702 Removing: /var/run/dpdk/spdk_pid71113 00:17:02.702 Removing: /var/run/dpdk/spdk_pid71144 00:17:02.702 Removing: /var/run/dpdk/spdk_pid71230 00:17:02.702 Removing: /var/run/dpdk/spdk_pid72392 00:17:02.702 Removing: /var/run/dpdk/spdk_pid72598 00:17:02.702 Removing: /var/run/dpdk/spdk_pid72727 00:17:02.702 Removing: /var/run/dpdk/spdk_pid73326 00:17:02.702 Removing: /var/run/dpdk/spdk_pid73527 00:17:02.702 Removing: 
/var/run/dpdk/spdk_pid73656 00:17:02.702 Removing: /var/run/dpdk/spdk_pid74266 00:17:02.702 Removing: /var/run/dpdk/spdk_pid74585 00:17:02.702 Removing: /var/run/dpdk/spdk_pid74714 00:17:02.703 Removing: /var/run/dpdk/spdk_pid76055 00:17:02.703 Removing: /var/run/dpdk/spdk_pid76298 00:17:02.703 Removing: /var/run/dpdk/spdk_pid76427 00:17:02.703 Removing: /var/run/dpdk/spdk_pid77768 00:17:02.703 Removing: /var/run/dpdk/spdk_pid78010 00:17:02.703 Removing: /var/run/dpdk/spdk_pid78139 00:17:02.703 Removing: /var/run/dpdk/spdk_pid79479 00:17:02.703 Removing: /var/run/dpdk/spdk_pid79909 00:17:02.703 Removing: /var/run/dpdk/spdk_pid80039 00:17:02.703 Removing: /var/run/dpdk/spdk_pid81479 00:17:02.703 Removing: /var/run/dpdk/spdk_pid81728 00:17:02.703 Removing: /var/run/dpdk/spdk_pid81857 00:17:02.963 Removing: /var/run/dpdk/spdk_pid83293 00:17:02.963 Removing: /var/run/dpdk/spdk_pid83541 00:17:02.963 Removing: /var/run/dpdk/spdk_pid83671 00:17:02.963 Removing: /var/run/dpdk/spdk_pid85101 00:17:02.963 Removing: /var/run/dpdk/spdk_pid85578 00:17:02.963 Removing: /var/run/dpdk/spdk_pid85716 00:17:02.963 Removing: /var/run/dpdk/spdk_pid85843 00:17:02.963 Removing: /var/run/dpdk/spdk_pid86238 00:17:02.963 Removing: /var/run/dpdk/spdk_pid86956 00:17:02.963 Removing: /var/run/dpdk/spdk_pid87323 00:17:02.963 Removing: /var/run/dpdk/spdk_pid87995 00:17:02.963 Removing: /var/run/dpdk/spdk_pid88420 00:17:02.963 Removing: /var/run/dpdk/spdk_pid89150 00:17:02.963 Removing: /var/run/dpdk/spdk_pid89548 00:17:02.963 Removing: /var/run/dpdk/spdk_pid91458 00:17:02.963 Removing: /var/run/dpdk/spdk_pid91885 00:17:02.963 Removing: /var/run/dpdk/spdk_pid92310 00:17:02.963 Removing: /var/run/dpdk/spdk_pid94337 00:17:02.963 Removing: /var/run/dpdk/spdk_pid94806 00:17:02.963 Removing: /var/run/dpdk/spdk_pid95295 00:17:02.963 Removing: /var/run/dpdk/spdk_pid96338 00:17:02.963 Removing: /var/run/dpdk/spdk_pid96649 00:17:02.963 Removing: /var/run/dpdk/spdk_pid97583 00:17:02.963 Removing: 
/var/run/dpdk/spdk_pid97899 00:17:02.963 Removing: /var/run/dpdk/spdk_pid98817 00:17:02.963 Removing: /var/run/dpdk/spdk_pid99130 00:17:02.963 Removing: /var/run/dpdk/spdk_pid99799 00:17:02.963 Clean 00:17:02.963 23:56:56 -- common/autotest_common.sh@1451 -- # return 0 00:17:02.963 23:56:56 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:17:02.963 23:56:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:02.963 23:56:56 -- common/autotest_common.sh@10 -- # set +x 00:17:03.222 23:56:57 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:17:03.222 23:56:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:03.222 23:56:57 -- common/autotest_common.sh@10 -- # set +x 00:17:03.222 23:56:57 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:03.222 23:56:57 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:17:03.222 23:56:57 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:17:03.222 23:56:57 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:17:03.222 23:56:57 -- spdk/autotest.sh@394 -- # hostname 00:17:03.222 23:56:57 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:17:03.481 geninfo: WARNING: invalid characters removed from testname! 
00:17:30.154 23:57:21 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:30.724 23:57:24 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:33.264 23:57:26 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:35.174 23:57:28 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:37.083 23:57:31 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:39.626 23:57:33 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:41.535 23:57:35 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:17:41.535 23:57:35 -- spdk/autorun.sh@1 -- $ timing_finish 00:17:41.535 23:57:35 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:17:41.535 23:57:35 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:17:41.535 23:57:35 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:17:41.535 23:57:35 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:41.535 + [[ -n 6173 ]] 00:17:41.535 + sudo kill 6173 00:17:41.545 [Pipeline] } 00:17:41.561 [Pipeline] // timeout 00:17:41.566 [Pipeline] } 00:17:41.580 [Pipeline] // stage 00:17:41.585 [Pipeline] } 00:17:41.598 [Pipeline] // catchError 00:17:41.606 [Pipeline] stage 00:17:41.608 [Pipeline] { (Stop VM) 00:17:41.620 [Pipeline] sh 00:17:41.903 + vagrant halt 00:17:44.441 ==> default: Halting domain... 00:17:52.608 [Pipeline] sh 00:17:52.892 + vagrant destroy -f 00:17:55.441 ==> default: Removing domain... 
00:17:55.453 [Pipeline] sh 00:17:55.737 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:17:55.746 [Pipeline] } 00:17:55.760 [Pipeline] // stage 00:17:55.764 [Pipeline] } 00:17:55.785 [Pipeline] // dir 00:17:55.814 [Pipeline] } 00:17:55.835 [Pipeline] // wrap 00:17:55.838 [Pipeline] } 00:17:55.847 [Pipeline] // catchError 00:17:55.852 [Pipeline] stage 00:17:55.855 [Pipeline] { (Epilogue) 00:17:55.863 [Pipeline] sh 00:17:56.142 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:18:00.351 [Pipeline] catchError 00:18:00.353 [Pipeline] { 00:18:00.367 [Pipeline] sh 00:18:00.651 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:18:00.651 Artifacts sizes are good 00:18:00.659 [Pipeline] } 00:18:00.666 [Pipeline] // catchError 00:18:00.673 [Pipeline] archiveArtifacts 00:18:00.678 Archiving artifacts 00:18:00.778 [Pipeline] cleanWs 00:18:00.802 [WS-CLEANUP] Deleting project workspace... 00:18:00.802 [WS-CLEANUP] Deferred wipeout is used... 00:18:00.809 [WS-CLEANUP] done 00:18:00.811 [Pipeline] } 00:18:00.825 [Pipeline] // stage 00:18:00.830 [Pipeline] } 00:18:00.843 [Pipeline] // node 00:18:00.848 [Pipeline] End of Pipeline 00:18:00.897 Finished: SUCCESS